The MySQL (TM) software delivers a very fast, multi-threaded,
multi-user, and robust SQL (Structured Query Language)
database server.
MySQL Server is intended for mission-critical, heavy-load
production systems as well as for embedding into mass-deployed software.
MySQL is a trademark of MySQL AB.
The MySQL software is Dual Licensed. Users can choose to
use the MySQL software as an Open Source/Free Software
product under the terms of the GNU General Public License
(http://www.gnu.org/licenses/) or can purchase a standard
commercial license from MySQL AB.
See section 1.4 MySQL Support and Licensing.
The MySQL web site (http://www.mysql.com/) provides the
latest information about the MySQL software.
The following list describes some sections of particular interest in this manual:
For information about the company behind the MySQL Database Server,
see section 1.3 What Is MySQL AB?.
For a discussion of the capabilities of the MySQL Database Server,
see section 1.2.2 The Main Features of MySQL.
For tips on porting the MySQL Database Software to new architectures
or operating systems, see section E Porting to Other Systems.
For a tutorial introduction to the MySQL Database Server,
see section 3 Tutorial Introduction.
For examples of SQL and benchmarking information, see the
benchmarking directory (`sql-bench' in the distribution).
Important:
Reports of errors (often called bugs), as well as questions and comments, should be sent to the mailing list at mysql@lists.mysql.com. See section 1.7.1.3 How to Report Bugs or Problems.
The mysqlbug script should be used to generate bug reports.
For source distributions, the mysqlbug script can be found in the
`scripts' directory. For binary distributions, mysqlbug
can be found in the `bin' directory (`/usr/bin' for the
MySQL-server RPM package).
If you have found a sensitive security bug in MySQL Server,
you should send an e-mail to security@mysql.com.
This is the MySQL reference manual; it documents MySQL
up to Version 3.23.57. Functional changes are always
indicated with reference to the version, so this manual is also suitable
if you are using an older version of the MySQL software
(such as 3.23 or 4.0-production).
There are also references for version 5.0 (development).
Being a reference manual, it does not provide general instruction on
SQL or relational database concepts.
As the MySQL Database Software is under constant development,
the manual is also updated frequently.
The most recent version of this manual is available at
http://www.mysql.com/documentation/ in many different formats,
including HTML, PDF, and Windows HLP versions.
The primary document is the Texinfo file.
The HTML version is produced automatically using a modified version of
texi2html.
The plain text and Info versions are produced with makeinfo.
The PostScript version is produced using texi2dvi and dvips.
The PDF version is produced with pdftex.
If you have a hard time finding information in the manual, you can try our searchable version at http://www.mysql.com/doc/.
If you have any suggestions concerning additions or corrections to this manual, please send them to the documentation team at docs@mysql.com.
This manual was initially written by David Axmark and Michael (Monty) Widenius. It is currently maintained by Michael (Monty) Widenius, Arjen Lentz, and Paul DuBois. For other contributors, see section C Credits.
The copyright (2003) to this manual is owned by the Swedish company
MySQL AB. See section 1.4.2 Copyrights and Licenses Used by MySQL.
This manual uses certain typographical conventions:
constant
Constant-width font is used for command names and options, SQL statements,
database, table, and column names, code, and file names. It is also used to
show what a user types, as in this example: ``To see how
mysqladmin works, invoke it with the
--help option.''
When commands are shown that are meant to be executed by a particular
program, the program is indicated by a prompt shown before the command. For
example, shell> indicates a command that you execute from your login
shell, and mysql> indicates a command that you execute from the
mysql client program:
shell> type a shell command here
mysql> type a mysql command here
Shell commands are shown using Bourne shell syntax. If you are using a
csh-style shell, you may need to issue commands slightly differently.
For example, the sequence to set an environment variable and run a command
looks like this in Bourne shell syntax:
shell> VARNAME=value some_command
For csh, you would execute the sequence like this:
shell> setenv VARNAME value
shell> some_command
Database, table, and column names must often be substituted into commands. To
indicate that such substitution is necessary, this manual uses
db_name, tbl_name, and col_name. For example, you might
see a statement like this:
mysql> SELECT col_name FROM db_name.tbl_name;
This means that if you were to enter a similar statement, you would supply your own database, table, and column names, perhaps like this:
mysql> SELECT author_name FROM biblio_db.author_list;
SQL keywords are not case-sensitive and may be written in uppercase or lowercase. This manual uses uppercase.
In syntax descriptions, square brackets (`[' and `]') are used
to indicate optional words or clauses. For example, in the following
statement, IF EXISTS is optional:
DROP TABLE [IF EXISTS] tbl_name
When a syntax element consists of a number of alternatives, the alternatives are separated by vertical bars (`|'). When one member from a set of choices may be chosen, the alternatives are listed within square brackets (`[' and `]'):
TRIM([[BOTH | LEADING | TRAILING] [remstr] FROM] str)
When one member from a set of choices must be chosen, the alternatives are listed within braces (`{' and `}'):
{DESCRIBE | DESC} tbl_name {col_name | wild}
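For example, according to this notation, each of the following statements is a
valid way to write the command shown above (author_list and author_name are the
hypothetical names used in the earlier substitution example):
mysql> DESCRIBE author_list author_name;
mysql> DESC author_list 'a%';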
MySQL, the most popular Open Source SQL database, is
developed, distributed, and supported by MySQL AB. MySQL AB is a
commercial company, founded by the MySQL developers, that builds its business
providing services around the MySQL database.
See section 1.3 What Is MySQL AB?.
The MySQL web site (http://www.mysql.com/)
provides the latest information about MySQL software and
MySQL AB.
MySQL is a database management system.
A database is a structured collection of data. To add, access, and process
data stored in a computer database, you need a database management system
such as MySQL Server. Since computers are very good at handling large
amounts of data, database management systems play a central role in computing,
as stand-alone utilities or as parts of other applications.
The SQL part of ``MySQL'' stands for ``Structured
Query Language''. SQL is the most common standardised language used to
access databases and is defined by the ANSI/ISO SQL Standard. (The SQL
standard has been evolving since 1986 and several versions exist. In this
manual, ``SQL-92'' refers to the standard released in 1992,
``SQL-99'' refers to the standard released in 1999, and
``SQL:2003'' refers to the version of the standard that is expected
to be released in mid-2003. We use the term ``the SQL standard'' to
mean the current version of the SQL Standard at any time.)
The MySQL software is Open Source.
Open Source means that it is possible for anyone to use and modify the software.
Anybody can download the MySQL software from the Internet and use it
without paying anything. If you wish, you may study the source code
and change it to suit your needs. The MySQL software uses the
GPL (GNU General Public License),
http://www.gnu.org/licenses/, to define what you
may and may not do with the software in different situations.
If you feel uncomfortable with the GPL or need to embed
MySQL code into a commercial application, you can buy a
commercially licensed version from us.
See section 1.4.3 MySQL Licenses.
MySQL Database Server is very fast, reliable, and easy to use.
If that is what you are looking for, you should give it a try.
MySQL Server also has a practical set of features developed in
close cooperation with our users. You can find a performance comparison
of MySQL Server with other database managers on our benchmark page.
See section 5.1.4 The MySQL Benchmark Suite.
MySQL Server was originally developed to handle large databases
much faster than existing solutions and has been successfully used in
highly demanding production environments for several years. Though
under constant development, MySQL Server today offers a rich and
useful set of functions. Its connectivity, speed, and security make
MySQL Server highly suited for accessing databases on the Internet.
MySQL Database Software is a client/server system that consists
of a multi-threaded SQL server that supports different backends,
several different client programs and libraries, administrative tools,
and a wide range of programming interfaces (APIs).
We also provide MySQL Server as a multi-threaded library which you
can link into your application to get a smaller, faster, easier-to-manage
product.
There is a large amount of contributed MySQL software available, and it is
very likely that your favourite application or language already supports the
MySQL Database Server.
The official way to pronounce MySQL is ``My Ess Que Ell'' (not
``my sequel''), but we don't mind if you pronounce it as ``my sequel''
or in some other localised way.
We started out with the intention of using mSQL to connect to our
tables using our own fast low-level (ISAM) routines. However, after some
testing we came to the conclusion that mSQL was not fast enough nor
flexible enough for our needs. This resulted in a new SQL interface to our
database but with almost the same API interface as mSQL. This API was
chosen to ease porting of third-party code.
The derivation of the name MySQL is not clear. Our base
directory and a large number of our libraries and tools have had the prefix
``my'' for well over 10 years. However, co-founder Monty Widenius's daughter
(some years younger) is also named My. Which of the two gave its name to
MySQL is still a mystery, even for us.
The name of the MySQL Dolphin (our logo) is Sakila. Sakila was chosen
by the founders of MySQL AB from a huge list of names suggested by users
in our "Name the Dolphin" contest. The winning name was submitted by
Ambrose Twebaze, an open source software developer from Swaziland, Africa.
According to Ambrose, the name Sakila has its roots in SiSwati, the local
language of Swaziland. Sakila is also the name of a town in Arusha,
Tanzania, near Ambrose's country of origin, Uganda.
The following list describes some of the important characteristics
of the MySQL Database Software. See section 1.5 MySQL 4.0 In A Nutshell.
The MySQL code gets tested with Purify
(a commercial memory leakage detector) as well as with Valgrind,
a GPL tool (http://developer.kde.org/~sewardj/).
Many column types, including FLOAT, DOUBLE, CHAR, VARCHAR,
TEXT, BLOB, DATE, TIME, DATETIME,
TIMESTAMP, YEAR, SET, and ENUM types.
See section 6.2 Column Types.
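As an illustration only (the table and column names here are invented for this
example), a single table can combine many of these column types:
mysql> CREATE TABLE type_demo (
    ->   name      VARCHAR(40),
    ->   price     DOUBLE,
    ->   notes     TEXT,
    ->   photo     BLOB,
    ->   added     DATE,
    ->   changed   TIMESTAMP,
    ->   published YEAR,
    ->   category  ENUM('book','article','other'),
    ->   tags      SET('new','used','rare')
    -> );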
Full operator and function support in the SELECT and WHERE
clauses of queries. For example:
mysql> SELECT CONCAT(first_name, " ", last_name)
-> FROM tbl_name
-> WHERE income/dependents > 10000 AND age > 30;
Full support for SQL GROUP BY and
ORDER BY clauses. Support
for group functions (COUNT(),
COUNT(DISTINCT ...),
AVG(), STD(),
SUM(), MAX(), MIN(), and GROUP_CONCAT()).
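For example, combining these clauses with group functions (using the same
placeholder table and columns as the query above):
mysql> SELECT age, COUNT(*), AVG(income), MAX(income)
    ->   FROM tbl_name
    ->   GROUP BY age
    ->   ORDER BY age;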
Support for LEFT OUTER JOIN and RIGHT OUTER JOIN with both standard
SQL and ODBC syntax.
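For example (the table and column names are placeholders for this illustration):
mysql> SELECT t1.name, t2.address
    ->   FROM tbl_name1 AS t1
    ->   LEFT OUTER JOIN tbl_name2 AS t2 ON t1.id = t2.id;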
DELETE, INSERT, REPLACE, and UPDATE return
the number of rows that were changed (affected). It is possible to return
the number of rows matched instead by setting a flag when connecting to the
server.
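As an illustration (with a hypothetical table and values), the mysql client
reports both counts for an UPDATE, which shows the difference between rows
matched and rows changed:
mysql> UPDATE tbl_name SET income = 12000 WHERE age > 30;
Query OK, 2 rows affected (0.01 sec)
Rows matched: 3  Changed: 2  Warnings: 0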
The MySQL-specific SHOW command can be used to retrieve
information about databases, tables, and indexes. The EXPLAIN command
can be used to determine how the optimiser resolves a query.
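For example (db_name, tbl_name, and col_name are placeholders, as elsewhere
in this manual):
mysql> SHOW TABLES FROM db_name;
mysql> SHOW INDEX FROM tbl_name;
mysql> EXPLAIN SELECT * FROM tbl_name WHERE col_name = 'some_value';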
Function names do not clash with table or column names. For example,
ABS is a valid column name. The only restriction is that for a
function call, no spaces are allowed between the function name and the
`(' that follows it. See section 6.1.7 Is MySQL Picky About Reserved Words?.
Handles large databases. We use MySQL Server with databases that
contain 50 million records. We also know of users that
use MySQL Server with 60,000 tables and about 5,000,000,000 rows.
The maximum index width is 500 bytes (this may be changed when compiling
MySQL Server).
An index may use a prefix of a CHAR or VARCHAR field.
Clients may connect to the MySQL server using TCP/IP Sockets,
Unix Sockets (Unix), or Named Pipes (NT).
ODBC (Open-DataBase-Connectivity) support for Win32 (with source).
All ODBC 2.5 functions are supported, as are many others. For example, you can use
MS Access to connect to your MySQL server. See section 8.2 MySQL ODBC Support.
Sorting is done according to the chosen character set, and it is possible
to change this when the MySQL
server is started. To see an example of very advanced sorting, look
at the Czech sorting code. MySQL Server supports many different
character sets that can be specified at compile time and runtime.
Includes myisamchk, a very fast utility for table checking,
optimisation, and repair. All of the functionality of myisamchk
is also available through the SQL interface.
See section 4 Database Administration.
All MySQL programs can be invoked with the --help or -?
options to obtain online assistance.
This section addresses the questions ``How stable is MySQL Server?'' and ``Can I depend on MySQL Server in this project?'' We will try to clarify these issues and answer some important questions that concern many potential users. The information in this section is based on data gathered from the mailing list, which is very active in identifying problems as well as reporting types of use.
The original code stems back from the early 1980s, providing a stable code
base, and the ISAM table format remains backward-compatible.
At TcX, the predecessor of MySQL AB, MySQL code has worked
in projects since mid-1996, without any problems.
When the MySQL Database Software was released to a wider public,
our new users quickly found some pieces of ``untested code''. Each new release
since then has had fewer portability problems (even though each new release
has also had many new features).
Each release of the MySQL Server has been usable. Problems have occurred
only when users try code from the ``gray zones.'' Naturally, new users
don't know what the gray zones are; this section therefore attempts to
document those areas that are currently known.
The descriptions mostly deal with Version 3.23 and 4.0 of MySQL Server.
All known and reported bugs are fixed in the latest version, with the
exception of those listed in the bugs section, which are things that
are design-related. See section 1.8.6 Known Errors and Design Deficiencies in MySQL.
The MySQL Server design is multi-layered with independent modules.
Some of the newer modules are listed here with an indication of how
well-tested each of them is:
MySQL 4.x.
InnoDB tables -- Stable (in 3.23 from 3.23.49)
InnoDB transactional storage engine has been declared
stable in the MySQL 3.23 tree, starting from version 3.23.49.
InnoDB is being used in large, heavy-load production systems.
BDB tables -- Gamma
Berkeley DB code is very stable, but we are still improving
the BDB transactional storage engine interface in
MySQL Server, so it will take some time before this is as well
tested as the other table types.
FULLTEXT -- Beta
Important enhancements have been implemented in MySQL 4.0.
MyODBC 3.51 (uses ODBC SDK 3.51) -- Stable
Automatic recovery of MyISAM tables -- Gamma
This applies to the new code in the MyISAM storage
engine that checks whether the table was closed properly on open and
executes an automatic check/repair of the table if it wasn't.
Bulk inserts -- Alpha
A new feature in MyISAM tables in MySQL 4.0 for faster
insertion of many rows.
Locking -- Gamma
This is very system-dependent. On some systems there are problems using
standard operating system locking (fcntl()). In these cases, you should
run mysqld with the --skip-external-locking flag.
Problems are known to occur on some Linux systems, and on SunOS when
using NFS-mounted filesystems.
MySQL AB provides high-quality support for paying customers,
and the MySQL mailing list usually provides answers to common
questions. Bugs are usually fixed right away with a patch; for serious
bugs, there is almost always a new release.
MySQL Version 3.22 had a 4 GB (4 gigabyte) limit on table size. With the
MyISAM table type in MySQL Version 3.23, the maximum table
size was pushed up to 8 million terabytes (2 ^ 63 bytes).
Note, however, that operating systems have their own file-size limits. Here are some examples:
| Operating System       | File-Size Limit                                |
| Linux-Intel 32-bit     | 2 GB, 4 GB, or more (depends on Linux version) |
| Linux-Alpha            | 8 TB (?)                                       |
| Solaris 2.5.1          | 2 GB (possibly 4 GB with patch)                |
| Solaris 2.6            | 4 GB (can be changed with a flag)              |
| Solaris 2.7 Intel      | 4 GB                                           |
| Solaris 2.7 UltraSPARC | 512 GB                                         |
On Linux 2.2 you can get tables larger than 2 GB in size by using the LFS patch for the ext2 filesystem. On Linux 2.4 patches also exist for ReiserFS to get support for big files.
In effect, then, the table size for MySQL databases is normally
limited by the operating system.
By default, MySQL tables have a maximum size of about 4 GB. You can
check the maximum table size for a table with the SHOW TABLE STATUS
command or with myisamchk -dv table_name.
See section 4.5.7 SHOW Syntax.
If you need a table that will be larger than 4 GB in size (and your operating system supports
this), set the AVG_ROW_LENGTH and MAX_ROWS
parameters accordingly when you create your table. See section 6.5.3 CREATE TABLE Syntax. You can
also set these parameters later, with ALTER TABLE. See section 6.5.4 ALTER TABLE Syntax.
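As a sketch (big_table and its columns are invented for this example; choose
values that match your own data), the table options can be given at creation
time or added later:
mysql> CREATE TABLE big_table (
    ->   id   INT UNSIGNED NOT NULL,
    ->   data BLOB
    -> ) MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 1000;
mysql> ALTER TABLE big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 1000;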
If your big table is a read-only table, you could use
myisampack to merge and compress many tables into one.
myisampack usually compresses a table by at least 50%, so you can
have, in effect, much bigger tables. See section 4.7.4 myisampack, The MySQL Compressed Read-only Table Generator.
You can get around the operating system file limit for MyISAM data
files using the RAID option. See section 6.5.3 CREATE TABLE Syntax.
Another solution can be the included MERGE library, which allows
you to handle a collection of identical tables as one.
See section 7.2 MERGE Tables.
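A minimal sketch of this approach, assuming log_2002 and log_2003 are existing
identical MyISAM tables (the names and columns are invented for this example):
mysql> CREATE TABLE all_logs (
    ->   id  INT NOT NULL,
    ->   msg VARCHAR(255)
    -> ) TYPE=MERGE UNION=(log_2002,log_2003);
mysql> SELECT COUNT(*) FROM all_logs;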
The MySQL Server itself has no problems with Year 2000 (Y2K)
compliance:
MySQL Server uses Unix time functions and has no problems with dates
until 2069. All 2-digit years are considered to be in the range
1970 to 2069, which means that if you store 01 in a
YEAR column, MySQL Server treats it as 2001.
MySQL date functions are stored in one file, `sql/time.cc',
and are coded very carefully to be year 2000-safe.
In MySQL Version 3.22 and later, the YEAR column type
can store years 0 and 1901 to 2155 in one byte and
display them using two or four digits.
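A small illustration of the 2-digit conversion described above (the table name
is invented for this example):
mysql> CREATE TABLE year_demo (y YEAR);
mysql> INSERT INTO year_demo VALUES (01), (99), (2030);
mysql> SELECT * FROM year_demo;
+------+
| y    |
+------+
| 2001 |
| 1999 |
| 2030 |
+------+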
You may run into problems with applications that use MySQL Server
in a way that is not Y2K-safe. For example, many old applications store
or manipulate years using 2-digit values (which are ambiguous) rather than
4-digit values. This problem may be compounded by applications that use
values such as 00 or 99 as ``missing'' value indicators.
Unfortunately, these problems may be difficult to fix because different applications may be written by different programmers, each of whom may use a different set of conventions and date-handling functions.
Here is a simple demonstration illustrating that MySQL Server
doesn't have any problems with dates until the year 2030:
mysql> DROP TABLE IF EXISTS y2k;
Query OK, 0 rows affected (0.01 sec)
mysql> CREATE TABLE y2k (date DATE,
-> date_time DATETIME,
-> time_stamp TIMESTAMP);
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO y2k VALUES
-> ("1998-12-31","1998-12-31 23:59:59",19981231235959),
-> ("1999-01-01","1999-01-01 00:00:00",19990101000000),
-> ("1999-09-09","1999-09-09 23:59:59",19990909235959),
-> ("2000-01-01","2000-01-01 00:00:00",20000101000000),
-> ("2000-02-28","2000-02-28 00:00:00",20000228000000),
-> ("2000-02-29","2000-02-29 00:00:00",20000229000000),
-> ("2000-03-01","2000-03-01 00:00:00",20000301000000),
-> ("2000-12-31","2000-12-31 23:59:59",20001231235959),
-> ("2001-01-01","2001-01-01 00:00:00",20010101000000),
-> ("2004-12-31","2004-12-31 23:59:59",20041231235959),
-> ("2005-01-01","2005-01-01 00:00:00",20050101000000),
-> ("2030-01-01","2030-01-01 00:00:00",20300101000000),
-> ("2050-01-01","2050-01-01 00:00:00",20500101000000);
Query OK, 13 rows affected (0.01 sec)
Records: 13 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM y2k;
+------------+---------------------+----------------+
| date | date_time | time_stamp |
+------------+---------------------+----------------+
| 1998-12-31 | 1998-12-31 23:59:59 | 19981231235959 |
| 1999-01-01 | 1999-01-01 00:00:00 | 19990101000000 |
| 1999-09-09 | 1999-09-09 23:59:59 | 19990909235959 |
| 2000-01-01 | 2000-01-01 00:00:00 | 20000101000000 |
| 2000-02-28 | 2000-02-28 00:00:00 | 20000228000000 |
| 2000-02-29 | 2000-02-29 00:00:00 | 20000229000000 |
| 2000-03-01 | 2000-03-01 00:00:00 | 20000301000000 |
| 2000-12-31 | 2000-12-31 23:59:59 | 20001231235959 |
| 2001-01-01 | 2001-01-01 00:00:00 | 20010101000000 |
| 2004-12-31 | 2004-12-31 23:59:59 | 20041231235959 |
| 2005-01-01 | 2005-01-01 00:00:00 | 20050101000000 |
| 2030-01-01 | 2030-01-01 00:00:00 | 20300101000000 |
| 2050-01-01 | 2050-01-01 00:00:00 | 00000000000000 |
+------------+---------------------+----------------+
13 rows in set (0.00 sec)
This example shows that the DATE and DATETIME data types will not
give any problems with future dates (they handle dates until the year
9999).
The TIMESTAMP data type, which is used to store the current time, supports
values that range from 19700101000000 to 20300101000000 on 32-bit
machines (signed value). On 64-bit machines, TIMESTAMP handles values
up to 2106 (unsigned value).
Even though MySQL Server is Y2K-compliant, it is your responsibility
to provide unambiguous input. See section 6.2.2.1 Y2K Issues and Date Types for MySQL Server's
rules for dealing with ambiguous date input data (data containing 2-digit
year values).
MySQL AB is the company of the MySQL founders and main
developers. MySQL AB was originally established in Sweden by
David Axmark, Allan Larsson, and Michael Monty Widenius.
The developers of the MySQL server are all employed by the company.
We are a virtual organisation with people in a dozen countries around
the world. We communicate extensively over the Net every day with one another
and with our users, supporters, and partners.
We are dedicated to developing the MySQL software and spreading
our database to new users. MySQL AB owns the copyright to the
MySQL source code, the MySQL logo and trademark, and this
manual. See section 1.2 What Is MySQL?.
The MySQL core values show our dedication to MySQL and
Open Source.
We want the MySQL Database Software to be the best and the most widely
used database in the world. MySQL AB and the people at MySQL AB promote
the Open Source philosophy and support the
Open Source community.
The MySQL web site (http://www.mysql.com/)
provides the latest information about MySQL and MySQL AB.
One of the most common questions we encounter is: ``How can you make a living from something you give away for free?'' This is how.
MySQL AB makes money on support, services, commercial licenses,
and royalties. We use these revenues to fund product development
and to expand the MySQL business.
The company has been profitable since its inception. In October 2001, we accepted venture financing from leading Scandinavian investors and a handful of business angels. This investment is used to solidify our business model and build a basis for sustainable growth.
MySQL AB is run and owned by the founders and main developers of
the MySQL database. The developers are committed to giving support
to customers and other users in order to stay in touch with their needs
and problems. All our support is given by qualified developers. Really
tricky questions are answered by Michael Monty Widenius, principal
author of the MySQL Server.
See section 1.4.1 Support Offered by MySQL AB.
For more information and ordering support at various levels, see http://www.mysql.com/support/ or contact our sales staff at sales@mysql.com.
MySQL AB delivers MySQL and related training worldwide.
We offer both open courses and in-house courses tailored to the
specific needs of your company. MySQL Training is also available
through our partners, the Authorised MySQL Training Centers.
Our training material uses the same example databases used in our
documentation and our sample applications, and is always updated
to reflect the latest MySQL version. Our trainers are backed by
the development team to guarantee the quality of the training and the
continuous development of the course material. This also ensures
that no questions raised during the courses remain unanswered.
Attending our training courses will enable you to achieve your MySQL
application goals. You will also gain skills that help prepare you for
MySQL Certification.
If you are interested in our training as a potential participant or as a training partner, please visit the training section at http://www.mysql.com/training/ or contact us at: training@mysql.com.
For details about the MySQL Certification Program, please see
http://www.mysql.com/certification/.
MySQL AB and its Authorised Partners offer consulting
services to users of MySQL Server and to those who embed
MySQL Server in their own software, all over the world.
Our consultants can help you design and tune your databases, construct
efficient queries, tune your platform for optimal performance, resolve
migration issues, set up replication, build robust transactional
applications, and more.
We also help customers embed MySQL Server in their products and
applications for large-scale deployment.
Our consultants work in close collaboration with our development team,
which ensures the technical quality of our professional services.
Consulting assignments range from 2-day power-start sessions to
projects that span weeks and months. Our expertise covers not only
MySQL Server, but also programming and scripting
languages such as PHP, Perl, and more.
If you are interested in our consulting services or want to become a consulting partner, please visit the consulting section of our web site at http://www.mysql.com/consulting/ or contact our consulting staff at consulting@mysql.com.
The MySQL database is released under the
GNU General Public License (GPL).
This means that the MySQL software can be used free of charge
under the GPL. If you do not want to be bound by the GPL
terms (such as the requirement that your application must also be GPL),
you may purchase a commercial license for the same product
from MySQL AB; see http://www.mysql.com/products/pricing.html.
Since MySQL AB owns the copyright to the MySQL source code,
we are able to employ Dual Licensing, which means that the same
product is available under GPL and under a commercial
license. This does not in any way affect the Open Source
commitment of MySQL AB. For details about when a commercial
license is required, please see section 1.4.3 MySQL Licenses.
We also sell commercial licenses of third-party Open Source GPL
software that adds value to MySQL Server. A good example is the
InnoDB transactional storage engine that offers ACID
support, row-level locking, crash recovery, multi-versioning, foreign
key support, and more. See section 7.5 InnoDB Tables.
MySQL AB has a worldwide partner programme that covers training
courses, consulting and support, publications, plus reselling and
distributing MySQL and related products. MySQL AB Partners
get visibility on the http://www.mysql.com/ web site and the right
to use special versions of the MySQL trademarks to identify their
products and promote their business.
If you are interested in becoming a MySQL AB Partner, please e-mail
partner@mysql.com.
The word MySQL and the MySQL dolphin logo are trademarks of
MySQL AB. See section 1.4.4 MySQL AB Logos and Trademarks.
These trademarks represent a significant value that the MySQL
founders have built over the years.
The MySQL web site (http://www.mysql.com/) is popular among
developers and users. In October 2001, we served 10 million page views.
Our visitors represent a group that makes purchase decisions and
recommendations for both software and hardware. Twelve percent of our
visitors authorise purchase decisions, and only nine percent are not
involved in purchase decisions at all. More than 65% have made one or
more online business purchases within the last half-year, and 70% plan
to make one in the next few months.
The MySQL web site (http://www.mysql.com/)
provides the latest information about MySQL and MySQL AB.
For press services and inquiries not covered in our News releases (http://www.mysql.com/news/), please send an e-mail to press@mysql.com.
If you have a valid support contract with MySQL AB, you will
get timely, precise answers to your technical questions about the
MySQL software. For more information, see section 1.4.1 Support Offered by MySQL AB.
On our web site, see http://www.mysql.com/support/, or send
an e-mail to sales@mysql.com.
For information about MySQL training, please visit the training
section at http://www.mysql.com/training/. If you have
restricted access to the Internet, please contact the MySQL AB
training staff via e-mail at training@mysql.com.
See section 1.3.1.2 Training and Certification.
For information on the MySQL Certification Program, please see
http://www.mysql.com/certification/.
See section 1.3.1.2 Training and Certification.
If you're interested in consulting, please visit the consulting
section of our web site at http://www.mysql.com/consulting/. If you have
restricted access to the Internet, please contact the MySQL AB
consulting staff via e-mail at consulting@mysql.com.
See section 1.3.1.3 Consulting.
Commercial licenses may be purchased online at
https://order.mysql.com/. There you will also find information
on how to fax your purchase order to MySQL AB. More information
about licensing can be found at
http://www.mysql.com/products/pricing.html.
If you have
questions regarding licensing or you want a quote for a high-volume
license deal, please fill in the contact form on our web site
(http://www.mysql.com/) or send an e-mail message
to licensing@mysql.com (for licensing questions) or to
sales@mysql.com (for sales inquiries).
See section 1.4.3 MySQL Licenses.
If you represent a business that is interested in partnering with
MySQL AB, please send an e-mail to partner@mysql.com.
See section 1.3.1.5 Partnering.
For more information on the MySQL trademark policy, refer to
http://www.mysql.com/company/trademark.html or send an e-mail to
trademark@mysql.com.
See section 1.4.4 MySQL AB Logos and Trademarks.
If you are interested in any of the MySQL AB jobs listed in our
jobs section (http://www.mysql.com/company/jobs/),
please send an e-mail to jobs@mysql.com.
Please do not send your CV as an attachment, but rather as plain text
at the end of your e-mail message.
For general discussion among our many users, please direct your attention to the appropriate mailing list. See section 1.7.1 MySQL Mailing Lists.
Reports of errors (often called bugs), as well as questions and
comments, should be sent to the mailing list at
mysql@lists.mysql.com. If you have found a sensitive
security bug in the MySQL Server, please send an e-mail
to security@mysql.com.
See section 1.7.1.3 How to Report Bugs or Problems.
If you have benchmark results that we can publish, please contact us via e-mail at benchmarks@mysql.com.
If you have suggestions concerning additions or corrections to this manual, please send them to the manual team via e-mail at docs@mysql.com.
For questions or comments about the workings or content of the
MySQL web site (http://www.mysql.com/),
please send an e-mail to webmaster@mysql.com.
MySQL AB has a privacy policy, which can be read at
http://www.mysql.com/company/privacy.html.
For any queries regarding this policy, please send an e-mail to
privacy@mysql.com.
For all other inquires, please send an e-mail to info@mysql.com.
This section describes MySQL support and licensing arrangements.
Technical support from MySQL AB means individualised answers
to your unique problems direct from the software engineers who code
the MySQL database engine.
We try to take a broad and inclusive view of technical support. Almost
any problem involving MySQL software is important to us if it's
important to you.
Typically customers seek help on how to get different commands and
utilities to work, remove performance bottlenecks, restore crashed
systems, understand operating system or networking impacts on MySQL,
set up best practices for backup and recovery, utilise APIs, and so on.
Our support covers only the MySQL server and our own utilities,
not third-party products that access the MySQL server, though we
try to help with these where we can.
Detailed information about our various support options is given at http://www.mysql.com/support/, where support contracts can also be ordered online. If you have restricted access to the Internet, please contact our sales staff via e-mail at sales@mysql.com.
Technical support is like life insurance. You can live happily
without it for years, but when your hour arrives it becomes
critically important, yet it's too late to buy it.
If you use MySQL Server for important applications and encounter
sudden difficulties, it may be too time consuming to figure out all the answers
yourself. You may need immediate access to the most experienced
MySQL troubleshooters available, those employed by MySQL AB.
MySQL AB owns the copyright to the MySQL source code,
the MySQL logos and trademarks and this manual.
See section 1.3 What Is MySQL AB?.
Several different licenses are relevant to the MySQL
distribution:
The MySQL-specific source code in the server, the mysqlclient
library and the client, as well as the GNU readline library,
is covered by the GNU General Public License.
See section H GNU General Public License.
The text of this license can be found as the file `COPYING'
in the distribution.
The GNU getopt library is covered by the
GNU Lesser General Public License.
See section I GNU Lesser General Public License.
Some parts of the source (the regexp library) are covered
by a Berkeley-style copyright.
Older versions of MySQL (3.22 and earlier) are subject to a
stricter license
(http://www.mysql.com/products/mypl.html).
See the documentation of the specific version for information.
The MySQL reference manual is currently not distributed
under a GPL-style license, and its use is subject to certain terms;
for some uses of the manual, written permission from
MySQL AB is required.
For information about how the MySQL licenses work in practice,
please refer to section 1.4.3 MySQL Licenses.
Also see section 1.4.4 MySQL AB Logos and Trademarks.
The MySQL software is released under the
GNU General Public License (GPL),
which is probably the best known Open Source license.
The formal terms of the GPL license can be found at
http://www.gnu.org/licenses/.
See also http://www.gnu.org/licenses/gpl-faq.html and
http://www.gnu.org/philosophy/enforcing-gpl.html.
Since the MySQL software is released under the GPL,
it may often be used for free, but for certain uses you may want
or need to buy commercial licenses from MySQL AB at
https://order.mysql.com/.
See http://www.mysql.com/products/licensing.html for
more information.
Older versions of MySQL (3.22 and earlier) are subject to a
stricter license
(http://www.mysql.com/products/mypl.html).
See the documentation of the specific version for information.
Please note that the use of the MySQL software under commercial
license, GPL, or the old MySQL license does not
automatically give you the right to use MySQL AB trademarks.
See section 1.4.4 MySQL AB Logos and Trademarks.
The GPL license is contagious in the sense that when a program
is linked to a GPL program all the source code for all the parts
of the resulting product must also be released under the GPL.
If you do not follow this GPL requirement, you break the license
terms and forfeit your right to use the GPL program altogether.
You also risk damages.
You need a commercial license:
When you link a program with GPL code from the MySQL
software and don't want the resulting product to be licensed under GPL,
perhaps because you want to build a commercial product or keep the added
non-GPL code closed source for other reasons. When purchasing
commercial licenses, you are not using the MySQL software under
GPL even though it's the same code.
When you distribute a non-GPL application that only works with the
MySQL software and ship it with the MySQL software. This type
of solution is considered to be linking even if it's done over a network.
When you distribute copies of the MySQL software without providing
the source code as required under the GPL license.
When you want to support the further development of the MySQL
database even if you don't formally need a commercial license.
Purchasing support directly from MySQL AB is another good way
of contributing to the development of the MySQL software, with
immediate advantages for you.
See section 1.4.1 Support Offered by MySQL AB.
If you require a license, you will need one for each installation of the
MySQL software. This covers any number of CPUs on a machine, and there
is no artificial limit on the number of clients that connect to the server
in any way.
For commercial licenses, please visit our website at http://www.mysql.com/products/licensing.html. For support contracts, see http://www.mysql.com/support/. If you have special needs or you have restricted access to the Internet, please contact our sales staff via e-mail at sales@mysql.com.
You can use the MySQL software for free under the GPL if
you adhere to the conditions of the GPL.
For additional details, including answers to common questions about the GPL,
see the generic FAQ from the Free Software Foundation at
http://www.gnu.org/licenses/gpl-faq.html.
Common uses of the GPL include:
When you distribute both your own application and the MySQL
source code under the GPL with your product.
When you distribute the MySQL source code bundled with other
programs that are not linked to or dependent on the MySQL system
for their functionality, even if you sell the distribution commercially.
This is called mere aggregation in the GPL license.
MySQL
system, you can use it for free.
When you are an Internet Service Provider (ISP) offering web hosting with
MySQL servers for your customers.
We encourage people to use ISPs that have MySQL support,
as this will give them the confidence that their ISP will, in fact,
have the resources to solve any problems they may experience with
the MySQL installation. Even if an ISP does not have
a commercial license for MySQL Server, their customers
should at least be given read access to the source of the MySQL
installation so that the customers can verify that it is correctly patched.
When you use the MySQL database software in conjunction with a
web server, you do not need a commercial license (so long as it is not
a product you distribute). This is true even if you run a commercial
web server that uses MySQL Server, because you are not
distributing any part of the MySQL system. However, in this
case we would like you to purchase MySQL support because the
MySQL software is helping your enterprise.
If your use of MySQL database software does not require a commercial
license, we encourage you to purchase support from MySQL AB anyway.
This way you contribute toward MySQL development and also gain
immediate advantages for yourself. See section 1.4.1 Support Offered by MySQL AB.
If you use the MySQL database software in a commercial context
such that you profit by its use, we ask that you further the development
of the MySQL software by purchasing some level of support. We feel
that if the MySQL database helps your business, it is reasonable to
ask that you help MySQL AB.
(Otherwise, if you ask us support questions, you are not only using
for free something into which we've put a lot of work, you're asking
us to provide free support, too.)
Many users of the MySQL database want to display the
MySQL AB dolphin logo on their web sites, books, or
boxed products. We welcome and encourage this, although it should be
noted that the word MySQL and the MySQL dolphin logo
are trademarks of MySQL AB and may only be used as stated in
our trademark policy at
http://www.mysql.com/company/trademark.html.
The MySQL dolphin logo was designed by the Finnish advertising
agency Priority in 2001. The dolphin was chosen as a suitable symbol
for the MySQL database since it is a smart, fast, and lean animal,
effortlessly navigating oceans of data. We also happen to like dolphins.
The original MySQL logo may only be used by representatives of
MySQL AB and by those having a written agreement allowing them
to do so.
We have designed a set of special Conditional Use logos that may be
downloaded from our web site at
http://www.mysql.com/press/logos.html
and used on third-party web sites without written permission from
MySQL AB.
The use of these logos is not entirely unrestricted but, as the name
implies, subject to our trademark policy that is also available on our
web site. You should read through the trademark policy if you plan to
use them. The requirements are basically as follows:
You, and not MySQL AB, are the creator and
owner of the site that displays the MySQL trademark.
The use may not be detrimental to MySQL AB
or to the value of the MySQL AB trademarks. We reserve the right to
revoke the right to use the MySQL AB trademark.
If you use the MySQL database under the GPL in an
application, your application must be Open Source and must
be able to connect to a MySQL server.
Contact us via e-mail at trademark@mysql.com to inquire about special arrangements to fit your needs.
You need written permission from MySQL AB before using MySQL
logos in the following cases:
When displaying any MySQL AB logo anywhere except on your web site.
When displaying any MySQL AB logo except the Conditional Use
logos mentioned previously on web sites or elsewhere.
Due to legal and commercial reasons we monitor the use of MySQL
trademarks on products, books, and other items. We usually require a fee for
displaying MySQL AB logos on commercial products, since we think
it is reasonable that some of the revenue is returned to fund further
development of the MySQL database.
MySQL partnership logos may be used only by companies and persons
having a written partnership agreement with MySQL AB. Partnerships
include certification as a MySQL trainer or consultant.
For more information, please see section 1.3.1.5 Partnering.
Using the Word MySQL in Printed Text or Presentations
MySQL AB welcomes references to the MySQL database, but
it should be noted that the word MySQL is a trademark of MySQL AB.
Because of this, you must append the trademark symbol (TM) to
the first or most prominent use of the word MySQL in a text and,
where appropriate, state that MySQL is a trademark of
MySQL AB. For more information, please refer to our trademark policy at
http://www.mysql.com/company/trademark.html.
Using the Word MySQL in Company and Product Names
Use of the word MySQL in product or company names or in Internet
domain names is not allowed without written permission from MySQL AB.
Long promised by MySQL AB and long awaited by our users,
MySQL Server 4.0 is now available in production version.
MySQL 4.0 is available for download from http://www.mysql.com/ and from our mirrors. MySQL 4.0 has been tested by a large number of users and is in production use at many large sites.
The major new features of MySQL Server 4.0 are geared toward our existing business and community users, enhancing the MySQL database software as the solution for mission-critical, heavy-load database systems. Other new features target the users of embedded databases.
MySQL Version 4.0.12 was declared stable for production use in March 2003. This means that, in future, only bug fixes will be done for the 4.0 release series and only critical bug fixes will be done for the older 3.23 series. See section 2.5.2 Upgrading From Version 3.23 to 4.0.
New features to the MySQL software are being added to MySQL 4.1
which is now also available (alpha version).
See section 1.6 MySQL 4.1 In A Nutshell.
Speed enhancements include faster bulk INSERTs, quicker searching on
packed indexes, faster creation of FULLTEXT indexes, and a faster COUNT(DISTINCT).
The InnoDB storage engine is now offered as a standard feature of the
MySQL server. This means full support for ACID transactions,
foreign keys with cascading UPDATE/DELETE, and row-level locking
are now standard features.
See section 7.5 InnoDB Tables.
The FULLTEXT search properties of MySQL Server 4.0 enable
FULLTEXT indexing of large text masses with both binary
and natural-language searching logic. You can customise minimal word
length and define your own stop word lists in any human language,
enabling a new set of applications to be built on MySQL Server.
See section 6.8 MySQL Full-text Search.
New features that ease migration from other database systems include
TRUNCATE TABLE (as in Oracle) and IDENTITY
as a synonym for automatically incremented keys (as in Sybase).
The UNION statement, a long-awaited standard SQL feature, has been implemented.
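For example (the table names are placeholders for this illustration):
mysql> SELECT author_name FROM old_authors
    -> UNION
    -> SELECT author_name FROM new_authors;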
MySQL now
supports a new character set, latin1_de, which ensures that the
German sorting order sorts words with umlauts in the same order
as do German telephone books.
Many mysqld parameters (startup options) can now be set without taking
down the server. This is a convenient feature for Database Administrators (DBAs).
See section 5.5.6 SET Syntax.
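A minimal sketch, assuming a sufficiently recent 4.0 server and the required
privileges (the variable and value are only an example):
mysql> SET GLOBAL max_connections = 200;
mysql> SHOW VARIABLES LIKE 'max_connections';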
Multi-table DELETE and UPDATE statements have been added.
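For illustration, assuming hypothetical orders and customers tables, the two
forms look like this:
mysql> DELETE orders FROM orders, customers
    ->   WHERE orders.customer_id = customers.customer_id
    ->   AND customers.status = 'closed';
mysql> UPDATE orders, customers
    ->   SET orders.status = 'on_hold'
    ->   WHERE orders.customer_id = customers.customer_id
    ->   AND customers.status = 'closed';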
Support has been added for symbolic linking to MyISAM at the table
level (and not just the database level as before) and for enabling
symlink handling by default on Windows.
SQL_CALC_FOUND_ROWS and FOUND_ROWS() are new functions that make it
possible to find out the number of rows a SELECT query that includes a
LIMIT clause would have returned without that clause.
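For example (tbl_name and col_name are placeholders):
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name
    ->   WHERE col_name > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();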
The news section of this manual includes a more in-depth list of features. See section D.3 Changes in release 4.0.x (Production).
libmysqld makes MySQL Server suitable for a vastly expanded realm of
applications. Using the embedded MySQL server library, one can
embed MySQL Server into various applications and electronic devices, where
the end user has no knowledge of there actually being an underlying
database. Embedded MySQL Server is ideal for use behind
the scenes in Internet appliances, public kiosks, turnkey
hardware/software combination units, high performance Internet
servers, self-contained databases distributed on CD-ROM, and so on.
Many users of libmysqld will benefit from the MySQL
Dual Licensing. For those not wishing to be bound by the GPL,
the software is also made available under a commercial license.
The embedded MySQL library uses the same interface as the normal
client library, so it is convenient and easy to use. See section 8.1.15 libmysqld, the Embedded MySQL Server Library.
MySQL Server 4.0 laid the foundation for new features such as nested subqueries and Unicode (implemented in version 4.1) and for the work on SQL-99 stored procedures being done for version 5.0. These features come at the top of the wish list of many of our customers.
With these additions, critics of the MySQL Database Server have to be more imaginative than ever in pointing out deficiencies in the MySQL Database Management System. Already well-known for its stability, speed, and ease of use, MySQL Server will be able to fulfill the requirement checklists of very demanding buyers.
The features listed in this section are implemented in MySQL 4.1. A few other features are still planned for MySQL 4.1. See section 1.9.1 New Features Planned For 4.1.
Most new features being coded, such as stored procedures, will be available in MySQL 5.0. See section 1.9.2 New Features Planned For 5.0.
Subqueries are now supported. For example:
SELECT * FROM t1 WHERE t1.a=(SELECT t2.b FROM t2);
SELECT * FROM t1 WHERE (1,2,3) IN (SELECT a,b,c FROM t2);
Derived tables are also supported; a derived table is a subquery in the
FROM clause of a SELECT statement. Here is
an example:
SELECT t1.a FROM t1, (SELECT * FROM t2) t3 WHERE t1.a=t3.a;
BTREE indexing is now supported for HEAP tables,
significantly improving response time for non-exact searches.
CREATE TABLE new_table LIKE original_table allows you to create a new table
with the exact structure of an existing table, using a single command.
SHOW WARNINGS shows warnings for the last command.
See section 4.5.7.9 SHOW WARNINGS | ERRORS.
A new HELP command can be used in the mysql command-line
client (and other clients) to get help for SQL commands.
The advantage of having this information on the server side is that the
information is always applicable for that particular server version.
The INSERT ... ON DUPLICATE KEY UPDATE ... syntax has been
implemented. This allows you to UPDATE an existing row if the
INSERT would have caused a duplicate in a PRIMARY or
UNIQUE key (index).
See section 6.4.3 INSERT Syntax.
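A minimal sketch, assuming a hypothetical page_hits table in which url is a
PRIMARY or UNIQUE key:
mysql> INSERT INTO page_hits (url, hits)
    ->   VALUES ('/index.html', 1)
    ->   ON DUPLICATE KEY UPDATE hits = hits + 1;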
A new aggregate function, GROUP_CONCAT(),
adds the extremely useful capability of concatenating columns from
grouped rows into a single result string.
See section 6.3.7 Functions for Use with GROUP BY Clauses.
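For example (the table and column names are invented for this illustration):
mysql> SELECT author_id, GROUP_CONCAT(title SEPARATOR ', ')
    ->   FROM book_list
    ->   GROUP BY author_id;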
The news section in this manual includes a more in-depth list of features. See section D.2 Changes in release 4.1.x (Alpha).
New features are being added to MySQL 4.1, which is already available for download (alpha version). See section 1.6.3 Ready for Immediate Development Use.
The set of features that are being added to version 4.1 is mostly fixed. Additional development is already ongoing for version 5.0. MySQL 4.1 will go through the steps of Alpha (during which time new features might still be added/changed), Beta (when we have feature freeze and only bug corrections will be done), and Gamma (indicating that a production release is just weeks ahead). At the end of this process, MySQL 4.1 will become the new production release.
MySQL 4.1 is currently in the alpha stage, and binaries are available for download at http://www.mysql.com/downloads/mysql-4.1.html. All binary releases pass our extensive test suite without any errors on the platforms on which we test. See section D.2 Changes in release 4.1.x (Alpha).
New development for MySQL is focused on the 5.0 release, featuring Stored Procedures and other new features. See section 1.9.2 New Features Planned For 5.0.
For those wishing to take a look at the bleeding edge of MySQL development, we have already made our BitKeeper repository for MySQL version 5.0 publicly available. See section 2.3.4 Installing from the Development Source Tree.
This section introduces you to the MySQL mailing lists and gives some guidelines as to how the lists should be used. When you subscribe to a mailing list, you will receive, as e-mail messages, all postings to the list. You will also be able to send your own questions and answers to the list.
To subscribe to the main MySQL mailing list, send a message to the electronic mail address mysql-subscribe@lists.mysql.com.
To unsubscribe from the main MySQL mailing list, send a message to the electronic mail address mysql-unsubscribe@lists.mysql.com.
When subscribing and unsubscribing, only the address to which you send your message is significant. The subject line and the body of the message are ignored.
If your reply address is not valid, you can specify your address
explicitly by adding a hyphen to the subscribe or unsubscribe command
word, followed by your address with the `@' character in your
address replaced by a `='. For example, to subscribe
your_name@host.domain, send a message to
mysql-subscribe-your_name=host.domain@lists.mysql.com.
Mail to mysql-subscribe@lists.mysql.com or mysql-unsubscribe@lists.mysql.com is handled automatically by the ezmlm mailing list processor. Information about ezmlm is available at the ezmlm web site (http://www.ezmlm.org/).
To post a message to the list itself, send your message to
mysql@lists.mysql.com. Please do not send mail about
subscribing or unsubscribing to mysql@lists.mysql.com because all
mail sent to that address is distributed automatically to thousands of other
users.
Your local site may have many subscribers to mysql@lists.mysql.com.
If so, it may have a local mailing list, so that messages sent from
lists.mysql.com to your site are propagated to the local list. In such
cases, please contact your system administrator to be added to or dropped
from the local MySQL list.
If you wish to have traffic for a mailing list go to a separate mailbox in
your mail program, set up a filter based on the message headers. You can
use either the List-ID: or Delivered-To: headers to identify
list messages.
The MySQL mailing lists are as follows:
announce-subscribe@lists.mysql.com announce
mysql-subscribe@lists.mysql.com mysql
mysql-digest-subscribe@lists.mysql.com mysql-digest
mysql list in digest form. Subscribing to this list means
you will get all list messages, sent as one large mail message once a day.
bugs-subscribe@lists.mysql.com bugs
This list is for people who want to be kept informed about bugs reported for
MySQL or who want to be
actively involved in the process of bug hunting and fixing.
See section 1.7.1.3 How to Report Bugs or Problems.
bugs-digest-subscribe@lists.mysql.com bugs-digest
bugs list in digest form.
internals-subscribe@lists.mysql.com internals
internals-digest-subscribe@lists.mysql.com internals-digest
internals list in digest form.
mysqldoc-subscribe@lists.mysql.com mysqldoc
mysqldoc-digest-subscribe@lists.mysql.com mysqldoc-digest
mysqldoc list in digest form.
benchmarks-subscribe@lists.mysql.com benchmarks
benchmarks-digest-subscribe@lists.mysql.com benchmarks-digest
benchmarks list in digest form.
packagers-subscribe@lists.mysql.com packagers
packagers-digest-subscribe@lists.mysql.com packagers-digest
packagers list in digest form.
java-subscribe@lists.mysql.com java
java-digest-subscribe@lists.mysql.com java-digest
java list in digest form.
win32-subscribe@lists.mysql.com win32
win32-digest-subscribe@lists.mysql.com win32-digest
win32 list in digest form.
myodbc-subscribe@lists.mysql.com myodbc
myodbc-digest-subscribe@lists.mysql.com myodbc-digest
myodbc list in digest form.
mysqlcc-subscribe@lists.mysql.com mysqlcc
All topics concerning the MySQL Control Center graphical client.
mysqlcc-digest-subscribe@lists.mysql.com mysqlcc-digest
mysqlcc list in digest form.
plusplus-subscribe@lists.mysql.com plusplus
plusplus-digest-subscribe@lists.mysql.com plusplus-digest
plusplus list in digest form.
msql-mysql-modules-subscribe@lists.mysql.com msql-mysql-modules
All topics concerning Perl support for MySQL with msql-mysql-modules,
which is now named DBD-mysql.
msql-mysql-modules-digest-subscribe@lists.mysql.com msql-mysql-modules-digest
msql-mysql-modules list in digest form.
You subscribe or unsubscribe to all lists using the same method described at the
beginning of this section. For example, to subscribe to or
unsubscribe from the myodbc list, send a message to
myodbc-subscribe@lists.mysql.com or
myodbc-unsubscribe@lists.mysql.com.
If you're unable to get an answer to your question(s) from a MySQL mailing list, one
option is to pay for support from MySQL AB. This will put you
in direct contact with MySQL developers. See section 1.4.1 Support Offered by MySQL AB.
The following table shows some MySQL mailing lists in languages other than English. These lists are not operated by MySQL AB, so we can't guarantee their quality.
mysql-france-subscribe@yahoogroups.com A French mailing list
list@tinc.net A Korean mailing list
To subscribe, email subscribe mysql your@e-mail.address to this list.
mysql-de-request@lists.4t2.com A German mailing list
To subscribe, email subscribe mysql-de your@e-mail.address to this list.
You can find information about this mailing list at
http://www.4t2.com/mysql/.
mysql-br-request@listas.linkway.com.br A Portuguese mailing list
To subscribe, email subscribe mysql-br your@e-mail.address to this list.
mysql-alta@elistas.net A Spanish mailing list
To subscribe, email subscribe mysql your@e-mail.address to this list.
Before posting a bug report or question, please search the MySQL online manual and the mailing list archives to see whether your question has already been answered.
If you can't find an answer in the manual or the archives, check with your local MySQL expert. If you still can't find an answer to your question, please follow the guidelines on sending mail to mysql@lists.mysql.com, outlined in the next section, before contacting us.
Our bugs database is public, and can be browsed and searched by anyone at http://bugs.mysql.com/. If you log into the system, you will also be able to enter new reports.
Writing a good bug report takes patience, but doing it right the first time saves time both for us and for yourself. A good bug report, containing a full test case for the bug, makes it very likely that we will fix the bug in the next release. This section will help you write your report correctly so that you don't waste your time doing things that may not help us much or at all.
We encourage everyone to use the mysqlbug script to generate a bug
report (or a report about any problem). mysqlbug can be
found in the `scripts' directory (source distribution) and in the
`bin' directory under your MySQL installation directory (binary distribution).
If you are unable to use mysqlbug (for instance, if you are running
on Windows), it is still vital that you include all the necessary information
noted in this section (most importantly a description of the operating system
and the MySQL version).
The mysqlbug script helps you generate a report by determining much
of the following information automatically, but if something important is
missing, please include it with your message. Please read this section
carefully and make sure that all the information described here is included
in your report.
Preferably, you should test the problem using the latest production or
development version of MySQL Server before posting. Anyone should be
able to repeat the bug by just using 'mysql test < script' on the
included test case or by running the shell or Perl script that is included in the
bug report.
All bugs posted in the bugs database or on the bugs@lists.mysql.com list will be corrected or documented in the next MySQL release. If only minor code changes are needed to correct a problem, we will also post a patch that fixes the problem.
The normal place to report bugs is http://bugs.mysql.com/.
If you have found a sensitive security bug in MySQL, please send an e-mail to security@mysql.com.
If you have a repeatable bug report, please report it to the bugs
database at http://bugs.mysql.com/. Note that even in this case
it's good to run the mysqlbug script first to find information
about your system. Any bug that we are able to repeat has a high chance
of being fixed in the next MySQL release.
To report other problems, you can use mysql@lists.mysql.com.
Remember that it is possible for us to respond to a message containing too much information, but not to one containing too little. People often omit facts because they think they know the cause of a problem and assume that some details don't matter. A good principle is: if you are in doubt about stating something, state it. It is a thousand times faster and less troublesome to write a couple of lines more in your report than to be forced to ask again and wait for the answer because you didn't include enough information the first time.
The most common errors made in bug reports are (a) not including the version number of the MySQL distribution used and (b) not fully describing the platform on which the MySQL server is installed (including the platform type and version number). This is highly relevant information, and in 99 cases out of 100 the bug report is useless without it. Very often we get questions like, ``Why doesn't this work for me?'' Then we find that the feature requested wasn't implemented in that MySQL version, or that a bug described in a report has already been fixed in newer MySQL versions. Sometimes the error is platform-dependent; in such cases, it is next to impossible for us to fix anything without knowing the operating system and the version number of the platform.
Remember also to provide information about your compiler, if it is related to the problem. Often people find bugs in compilers and think the problem is MySQL-related. Most compilers are under development all the time and become better version by version. To determine whether your problem depends on your compiler, we need to know what compiler you use. Note that every compiling problem should be regarded as a bug and reported accordingly.
It is most helpful when a good description of the problem is included in the bug report. That is, give a good example of all the things you did that led to the problem and describe, in exact detail, the problem itself. The best reports are those that include a full example showing how to reproduce the bug or problem. See section E.1.6 Making a Test Case If You Experience Table Corruption.
If a program produces an error message, it is very important to include the message in your report. If we try to search for something from the archives using programs, it is better that the error message reported exactly matches the one that the program produces. (Even the case should be observed.) You should never try to remember what the error message was; instead, copy and paste the entire message into your report.
If you have a problem with MyODBC, please try to generate a MyODBC trace file and send it with your report. See section 8.2.7 Reporting Problems with MyODBC.
Please remember that many of the people who will read your report will
do so using an 80-column display. When generating reports or examples
using the mysql command-line tool, you should therefore use
the --vertical option (or the \G statement terminator)
for output that would exceed the available width for such a display
(for example, with the EXPLAIN SELECT statement; see the
example later in this section).
Please include the following information in your report:
The version number of the MySQL distribution you are using, as reported by
mysqladmin version. mysqladmin can be found in the `bin' directory
under your MySQL installation directory.
The operating system name and version number of the platform you are running.
On Unix, this is reported by uname -a. If you work with Windows, you can
usually get the name and version number by double-clicking the ``My Computer''
icon and pulling down the ``Help/About Windows'' menu.
If mysqld died, you should also report the query that crashed
mysqld. You can usually find this out by running mysqld with
logging enabled. See section E.1.5 Using Log Files to Find Cause of Errors in mysqld.
If any database tables are related to the problem, include the output from
mysqldump --no-data db_name tbl_name1 tbl_name2 .... This is very easy
to do and is a powerful way to get information about any table in a database.
The information will help us create a situation matching the one you have.
For speed-related bug reports involving SELECT statements, you
should always include the output of EXPLAIN SELECT ..., and at
least the number of rows that the SELECT statement produces. You
should also include the output from SHOW CREATE TABLE tbl_name
for each involved table. The more information you give about your
situation, the more likely it is that someone can help you. The following
is an example of a very good bug report (it
should of course be posted with the mysqlbug script).
Example run using the mysql command-line tool (note the use of the
\G statement terminator for statements whose output width would
otherwise exceed that of an 80-column display device):
mysql> SHOW VARIABLES;
mysql> SHOW COLUMNS FROM ...\G
<output from SHOW COLUMNS>
mysql> EXPLAIN SELECT ...\G
<output from EXPLAIN>
mysql> FLUSH STATUS;
mysql> SELECT ...;
<A short version of the output from SELECT,
including the time taken to run the query>
mysql> SHOW STATUS;
<output from SHOW STATUS>
If the bug or problem occurs while running mysqld, try to provide an
input script that will reproduce the anomaly. This script should include any
necessary source files. The more closely the script can reproduce your
situation, the better. If you can make a reproducible test case, you should
post it on http://bugs.mysql.com/ for high-priority treatment.
If you can't provide a script, you should at least include the output
from mysqladmin variables extended-status processlist in your mail to
provide some information on how your system is performing.
If your problem depends on the contents of your tables, dump the relevant
tables with mysqldump and create a `README' file
that describes your problem.
Create a compressed archive of your files using
tar and gzip or zip, and use ftp to transfer the
archive to ftp://support.mysql.com/pub/mysql/secret/. Then enter
the problem into our bugs database at http://bugs.mysql.com/.
If you are concerned about keeping your data private, you can use ftp to transfer it to
ftp://support.mysql.com/pub/mysql/secret/. If the data is really top
secret and you don't want to show it even to us, then go ahead and provide
an example using other names, but please regard this as the last choice.
Include the options that you give to the mysqld
daemon as well as the options that you use to run any MySQL client programs. The
options to programs like mysqld and mysql, and to the
configure script, are often keys to answers and are very relevant.
It is never a bad idea to include them. If you use any modules, such
as Perl or PHP, please include the version number(s) of those as well.
If you have a problem with the privilege system, include the output of
mysqlaccess, the output of mysqladmin reload, and all
the error messages you get when trying to connect. When you test your
privileges, you should first run mysqlaccess. After this, execute
mysqladmin reload version and try to connect with the program that
gives you trouble. mysqlaccess can be found in the `bin'
directory under your MySQL installation directory.
If your problem is a parse error, please check your syntax closely. If
you can't find something wrong with it, it's extremely likely that your
current version of MySQL Server doesn't support the syntax you are
using. If you are using the current version and the manual at
http://www.mysql.com/doc/ doesn't cover the
syntax you are using, MySQL Server doesn't support your query. In this
case, your only options are to implement the syntax yourself or e-mail
licensing@mysql.com and ask for an offer to implement it.
If the manual covers the syntax you are using, but you have an older version
of MySQL Server, you should check the MySQL change history to see
when the syntax was implemented. In this case, you have the option of
upgrading to a newer version of MySQL Server. See section D MySQL Change History.
If you have a problem with tables getting corrupted, first try to repair them
with myisamchk or CHECK TABLE and
REPAIR TABLE. See section 4 Database Administration.
mysqld should never crash a table if nothing killed it in the
middle of an update. If you can find the cause of mysqld dying,
it's much easier for us to provide you with a fix for the problem.
See section A.1 How to Determine What Is Causing Problems.
If you are a support customer, please cross-post the bug report to mysql-support@mysql.com for higher-priority treatment, as well as to the appropriate mailing list to see if someone else has experienced (and perhaps solved) the problem.
For information on reporting bugs in MyODBC, see section 8.2.4 How to Report Problems with MyODBC.
For solutions to some common problems, see section A Problems and Common Errors.
When answers are sent to you individually and not to the mailing list, it is considered good etiquette to summarise the answers and send the summary to the mailing list so that others may have the benefit of responses you received that helped you solve your problem.
If you consider your answer to have broad interest, you may want to post it to the mailing list instead of replying directly to the individual who asked. Try to make your answer general enough that people other than the original poster may benefit from it. When you post to the list, please make sure that your answer is not a duplication of a previous answer.
Try to summarise the essential part of the question in your reply; don't feel obliged to quote the entire original message.
Please don't post mail messages from your browser with HTML mode turned on. Many users don't read mail with a browser.
In addition to the various MySQL mailing lists, you can find experienced
community people on IRC (Internet Relay Chat).
These are the best networks/channels currently known to us:
#mysql
Primarily MySQL questions but other database and SQL questions welcome.
#mysqlphp
Questions about MySQL+PHP, a popular combination.
#mysqlperl
Questions about MySQL+Perl, another popular combination.
#mysql
MySQL questions.
If you are looking for IRC client software to connect to an IRC network,
take a look at X-Chat (http://www.xchat.org/).
X-Chat (GPL licensed) is available for Unix as well as for Windows platforms.
This section describes how MySQL relates to the ANSI/ISO SQL standards. MySQL Server has many extensions to the SQL standard, and here you will find out what they are and how to use them. You will also find information about functionality missing from MySQL Server, and how to work around some differences.
Our goal is not to restrict the usability of MySQL Server for any purpose without a very good reason. Even if we don't have the resources to do development for every possible use, we are always willing to help and offer suggestions to people who are trying to use MySQL Server in new territories.
One of our main goals with the product is to continue to work toward
compliance with the SQL-99 standard, but without sacrificing speed or reliability.
We are not afraid to add extensions to SQL or support for non-SQL
features if this greatly increases the usability of MySQL Server for a big
part of our users. (The new HANDLER interface in MySQL Server 4.0
is an example of this strategy. See section 6.4.2 HANDLER Syntax.)
We will continue to support transactional and non-transactional databases to satisfy both heavy web/logging usage and mission-critical 24/7 usage.
MySQL Server was designed from the start to work with medium size databases (10-100 million rows, or about 100 MB per table) on small computer systems. We will continue to extend MySQL Server to work even better with terabyte-size databases, as well as to make it possible to compile a reduced MySQL version that is more suitable for hand-held devices and embedded usage. The compact design of the MySQL server makes both of these directions possible without any conflicts in the source tree.
We are currently not targeting realtime support or clustered databases (even if you can already do a lot of things with our replication services).
We don't believe that one should have native XML support in the database, but will instead add the XML support our users request from us on the client side. We think it's better to keep the main server code as ``lean and clean'' as possible and instead develop libraries to deal with the complexity on the client side. This is part of the strategy mentioned previously of not sacrificing speed or reliability in the server.
MySQL Server currently follows entry-level SQL-92 and ODBC levels 0-3.51.
We are aiming toward supporting the full SQL-99 standard, but without sacrificing the speed or quality of the code.
If you start mysqld with the --ansi option, the following
behaviour of MySQL Server changes:
|| is string concatenation instead of OR.
A space is allowed between a function name and the `(' character (IGNORE_SPACE), which makes all function names reserved words. As a result, if you want to access the user table in the mysql database (user is also a function name), you have to quote it:
SELECT "user" FROM mysql."user";
REAL will be a synonym for FLOAT instead of a synonym for
DOUBLE.
The default transaction isolation level is SERIALIZABLE.
See section 6.7.3 SET TRANSACTION Syntax.
You are not allowed to refer to a column in GROUP BY that is not in the
field list.
This is the same as starting mysqld with
--sql-mode=REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,
IGNORE_SPACE,ONLY_FULL_GROUP_BY --transaction-isolation=serializable
or in MySQL 4.1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; SET SQL_MODE="REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ONLY_FULL_GROUP_BY";
In MySQL 4.1.1, the SQL_MODE setting above can also be given with:
SET SQL_MODE="ansi";
In this case, SQL_MODE is set to all options that
are relevant for ANSI mode. You can check the result by
doing:
mysql> SET SQL_MODE="ansi";
mysql> SELECT @@SQL_MODE;
-> "REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ONLY_FULL_GROUP_BY,ANSI"
MySQL Server includes some extensions that you probably will not find in
other SQL databases. Be warned that if you use them, your code will not be
portable to other SQL servers. In some cases, you can write code that
includes MySQL extensions, but is still portable, by using comments
of the form /*! ... */. In this case, MySQL Server will parse and
execute the code within the comment as it would any other MySQL
statement, but other SQL servers will ignore the extensions. For example:
SELECT /*! STRAIGHT_JOIN */ col_name FROM table1,table2 WHERE ...
If you add a version number after the '!', the syntax will be
executed only if the MySQL version is equal to or newer than the used
version number:
CREATE /*!32302 TEMPORARY */ TABLE t (a INT);
This means that if you have Version 3.23.02 or newer, MySQL
Server will use the TEMPORARY keyword.
The following is a list of MySQL extensions:
The column types MEDIUMINT, SET, ENUM, and the
different BLOB and TEXT types.
The column attributes AUTO_INCREMENT, BINARY, NULL,
UNSIGNED, and ZEROFILL.
String comparisons are case-insensitive by default. To make a comparison
case-sensitive, declare the column with the BINARY attribute or use the
BINARY cast, which causes comparisons to be done according to the ASCII
order used on the MySQL server host.
Access to tables in another database with the db_name.tbl_name syntax. Some SQL servers provide
the same functionality but call this User space.
MySQL Server doesn't support tablespaces as in:
create table ralph.my_table...IN my_tablespace.
LIKE is allowed on numeric columns.
INTO OUTFILE and STRAIGHT_JOIN in a SELECT
statement. See section 6.4.1 SELECT Syntax.
SQL_SMALL_RESULT option in a SELECT statement.
EXPLAIN SELECT to get a description of how tables are joined.
INDEX or KEY in a CREATE TABLE
statement. See section 6.5.3 CREATE TABLE Syntax.
TEMPORARY or IF NOT EXISTS with CREATE TABLE.
COUNT(DISTINCT list) where list has more than one element.
CHANGE col_name, DROP col_name, or DROP
INDEX, IGNORE or RENAME in an ALTER TABLE
statement. See section 6.5.4 ALTER TABLE Syntax.
RENAME TABLE. See section 6.5.5 RENAME TABLE Syntax.
ADD, ALTER, DROP, or CHANGE
clauses in an ALTER TABLE statement.
DROP TABLE with the keywords IF EXISTS.
DROP TABLE statement.
ORDER BY and LIMIT clauses of the UPDATE and
DELETE statements.
DELAYED clause of the INSERT and REPLACE
statements.
LOW_PRIORITY clause of the INSERT, REPLACE,
DELETE, and UPDATE statements.
LOAD DATA INFILE. In many cases, this syntax is compatible with
Oracle's LOAD DATA INFILE. See section 6.4.9 LOAD DATA INFILE Syntax.
ANALYZE TABLE, CHECK TABLE, OPTIMIZE TABLE, and
REPAIR TABLE statements.
SHOW statement.
See section 4.5.7 SHOW Syntax.
SET statement. See section 5.5.6 SET Syntax.
You don't need to name all selected columns in the GROUP BY part.
This gives better performance for some very specific, but quite normal
queries.
See section 6.3.7 Functions for Use with GROUP BY Clauses.
ASC and DESC with GROUP BY.
|| and && operators to mean
logical OR and AND, as in the C programming language. In MySQL Server,
|| and OR are synonyms, as are && and AND.
Because of this nice syntax, MySQL Server doesn't support
the standard SQL-99 || operator for string concatenation; use
CONCAT() instead. Because CONCAT() takes any number
of arguments, it's easy to convert use of the || operator to
MySQL Server.
CREATE DATABASE or DROP DATABASE.
See section 6.5.1 CREATE DATABASE Syntax.
% operator is a synonym for MOD(). That is,
N % M is equivalent to MOD(N,M). % is supported
for C programmers and for compatibility with PostgreSQL.
The =, <>, <=, <, >=, >,
<<, >>, <=>, AND, OR, or LIKE
operators may be used in column comparisons to the left of the
FROM in SELECT statements. For example:
mysql> SELECT col1=1 AND col2=2 FROM tbl_name;
LAST_INSERT_ID() function.
See section 8.1.3.130 mysql_insert_id().
REGEXP and NOT REGEXP extended regular expression
operators.
CONCAT() or CHAR() with one argument or more than two
arguments. (In MySQL Server, these functions can take any number of
arguments.)
BIT_COUNT(), CASE, ELT(),
FROM_DAYS(), FORMAT(), IF(), PASSWORD(),
ENCRYPT(), MD5(), ENCODE(), DECODE(),
PERIOD_ADD(), PERIOD_DIFF(), TO_DAYS(), or
WEEKDAY() functions.
TRIM() to trim substrings. SQL-99 supports removal
of single characters only.
GROUP BY functions STD(), BIT_OR(),
BIT_AND(), and GROUP_CONCAT().
See section 6.3.7 Functions for Use with GROUP BY Clauses.
REPLACE instead of DELETE + INSERT.
See section 6.4.8 REPLACE Syntax.
FLUSH, RESET and DO statements.
The := assignment operator. For example:
SELECT @a:=SUM(total),@b:=COUNT(*),@a/@b AS avg FROM test_table;
SELECT @t1:=(@t2:=1)+@t3:=4,@t1,@t2,@t3;
We try to make MySQL Server follow the ANSI SQL standard (SQL-92/SQL-99) and the ODBC SQL standard, but in some cases MySQL Server does things differently:
For VARCHAR columns, trailing spaces are removed when the value is
stored. See section 1.8.6 Known Errors and Design Deficiencies in MySQL.
In some cases, CHAR columns are silently changed to VARCHAR
columns. See section 6.5.3.1 Silent Column Specification Changes.
Privileges for a table are not automatically revoked when you drop the table.
You must explicitly issue a REVOKE to revoke privileges for
a table. See section 4.3.1 GRANT and REVOKE Syntax.
NULL AND FALSE will evaluate to NULL and not to FALSE.
This is because we don't think it's good to have to evaluate a lot of
extra conditions in this case.
For a prioritised list indicating when new extensions will be added to MySQL Server, you should consult the online MySQL TODO list at http://www.mysql.com/doc/en/TODO.html. That is the latest version of the TODO list in this manual. See section 1.9 MySQL and The Future (The TODO).
Sub-SELECTs
Subqueries are supported in MySQL version 4.1. See section 1.6.1 Features Available From MySQL 4.1.
Up to version 4.0, only nested queries of the form
INSERT ... SELECT ... and REPLACE ... SELECT ...
are supported.
You can, however, use the function IN() in other contexts.
You can often rewrite the query without a subquery:
SELECT * FROM table1 WHERE id IN (SELECT id FROM table2);
This can be rewritten as:
SELECT table1.* FROM table1,table2 WHERE table1.id=table2.id;
The queries:
SELECT * FROM table1 WHERE id NOT IN (SELECT id FROM table2);
SELECT * FROM table1 WHERE NOT EXISTS (SELECT id FROM table2
WHERE table1.id=table2.id);
can be rewritten as:
SELECT table1.* FROM table1 LEFT JOIN table2 ON table1.id=table2.id
WHERE table2.id IS NULL;
Using a LEFT [OUTER] JOIN is generally much faster than an
equivalent subquery because the server can optimise it better,
a fact that is not specific to MySQL Server alone.
Prior to SQL-92, outer joins did not exist, so subqueries were the
only way to do certain things in those bygone days. But that is no
longer the case; MySQL Server and many other modern database
systems offer a whole range of outer join types.
For more complicated subqueries you can often create temporary tables
to hold the subquery. In some cases, however, this option will not
work. The most frequently encountered of these cases arises with
DELETE statements, for which standard SQL does not support joins
(except in subqueries). For this situation there are three options
available:
Use a SELECT query to obtain the primary keys
for the records to be deleted, and then use these values to construct
the DELETE statement (DELETE FROM ... WHERE ... IN (key1,
key2, ...)).
Generate the DELETE statements automatically, using the MySQL
extension CONCAT() (in lieu of the standard || operator).
For example:
SELECT CONCAT('DELETE FROM tab1 WHERE pkid = ', "'", tab1.pkid, "'", ';')
FROM tab1, tab2
WHERE tab1.col1 = tab2.col2;
You can place this query in a script file and redirect input from it to
the mysql command-line interpreter, piping its output back to a
second instance of the interpreter:
shell> mysql --skip-column-names mydb < myscript.sql | mysql mydb
MySQL Server 4.0 supports multi-table DELETEs that can be used to
efficiently delete rows based on information from one table or even
from many tables at the same time.
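For illustration only (reusing the hypothetical tab1 and tab2 names and the join condition from the example above), such a multi-table DELETE in MySQL 4.0 could look like this:
mysql> DELETE tab1 FROM tab1, tab2 WHERE tab1.col1 = tab2.col2;
This removes from tab1 every row that has a match in tab2, without any subquery.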
SELECT INTO TABLE
MySQL Server doesn't yet support the Oracle SQL extension:
SELECT ... INTO TABLE .... Instead, MySQL Server supports the
SQL-99 syntax INSERT INTO ... SELECT ..., which is basically
the same thing. See section 6.4.3.1 INSERT ... SELECT Syntax.
INSERT INTO tblTemp2 (fldID) SELECT tblTemp1.fldOrder_ID
FROM tblTemp1 WHERE tblTemp1.fldOrder_ID > 100;
Alternatively, you can use SELECT INTO OUTFILE... or
CREATE TABLE ... SELECT.
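As a sketch of the CREATE TABLE ... SELECT alternative, again reusing the hypothetical tblTemp1/tblTemp2 names from the example above:
mysql> CREATE TABLE tblTemp2 SELECT fldOrder_ID AS fldID FROM tblTemp1 WHERE fldOrder_ID > 100;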
MySQL Server (version 3.23-max and all versions 4.0 and above) supports
transactions with the InnoDB and BDB
transactional storage engines.
InnoDB provides full ACID compliance.
See section 7 MySQL Table Types.
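A minimal sketch of a transaction, assuming a hypothetical account table that uses one of these transactional storage engines (for example InnoDB):
mysql> BEGIN;
mysql> UPDATE account SET balance=balance-50 WHERE id=1;
mysql> UPDATE account SET balance=balance+50 WHERE id=2;
mysql> COMMIT;     # or ROLLBACK to undo both updates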
The other non-transactional table types (such as MyISAM) in
MySQL Server follow a different paradigm for data integrity called
``Atomic Operations.'' In transactional terms, MyISAM
tables effectively always operate in AUTOCOMMIT=1 mode.
Atomic operations often offer comparable integrity with higher performance.
With MySQL Server supporting both paradigms, the user is able to decide if he needs the speed of atomic operations or if he needs to use transactional features in his applications. This choice can be made on a per-table basis.
As noted, the trade-off for transactional vs. non-transactional table
types lies mostly in performance. Transactional tables have significantly
higher memory and disk space requirements, and more CPU overhead.
That said, transactional table types such as InnoDB do of course
offer many unique features. MySQL Server's modular design allows the
concurrent use of all these different storage engines to suit different
requirements and deliver optimum performance in all situations.
But how does one use the features of MySQL Server to maintain rigorous
integrity even with the non-transactional MyISAM tables, and how
do these features compare with the transactional table types?
If your applications are written in a way that depends on being able to
call ROLLBACK instead of
COMMIT in critical situations, transactions are more
convenient. Transactions also ensure that unfinished updates or
corrupting activities are not committed to the database; the server is
given the opportunity to do an automatic rollback and your database is
saved.
MySQL Server, in almost all cases, allows you to resolve potential problems
by including simple checks before updates and by running simple scripts
that check the databases for inconsistencies and automatically repair
or warn if such an inconsistency occurs. Note that just by using the
MySQL log or even adding one extra log, one can normally fix tables
perfectly with no data integrity loss.
You can use LOCK TABLES or atomic updates, ensuring
that you never get an automatic abort from the server, which is
a common problem with transactional database systems.
The transactional paradigm has its benefits and its drawbacks. Many users and application developers depend on the ease with which they can code around problems where an abort appears to be, or is, necessary. However, even if you are new to the atomic operations paradigm, or more familiar with transactions, do consider the speed benefit that non-transactional tables can offer: on the order of three to five times the speed of the fastest and most optimally tuned transactional tables.
In situations where integrity is of highest importance, MySQL Server offers
transaction-level reliability and integrity even for non-transactional tables.
If you lock tables with LOCK TABLES, all updates will stall
until any integrity checks are made. If you only obtain a read lock
(as opposed to a write lock), reads and inserts are still allowed
to happen. The new inserted records will not be seen by any of the
clients that have a read lock until they release their read
locks. With INSERT DELAYED you can queue inserts into a local
queue, until the locks are released, without having the client wait
for the insert to complete. See section 6.4.4 INSERT DELAYED Syntax.
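For example, assuming a hypothetical MyISAM log_table, the client returns immediately and the row is queued until the table is free:
mysql> INSERT DELAYED INTO log_table (msg) VALUES ('written when the locks are released');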
``Atomic,'' in the sense that we mean it, is nothing magical. It only means that you can be sure that while each specific update is running, no other user can interfere with it, and there will never be an automatic rollback (which can happen with transactional tables if you are not very careful). MySQL Server also guarantees that there will not be any dirty reads.
Following are some techniques for working with non-transactional tables:
Loops that need transactions can normally be coded with the help of
LOCK TABLES, and you don't need cursors when you can update
records on the fly.
To avoid using ROLLBACK, you can use the following strategy (see the sketch below):
Use LOCK TABLES ... to lock all the tables you want to access.
Check your conditions and perform your updates.
Use UNLOCK TABLES to release your locks.
This is usually a much faster method than using transactions with possible
ROLLBACKs, although not always. The only situation
this solution doesn't handle is when someone kills the threads in the
middle of an update. In this case, all locks will be released but some
of the updates may not have been executed.
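A minimal sketch of this strategy, using hypothetical customer and payment tables:
mysql> LOCK TABLES customer WRITE, payment WRITE;
mysql> SELECT money_he_owes_us FROM customer WHERE customer_id=1;
mysql> UPDATE customer SET money_he_owes_us=money_he_owes_us-125 WHERE customer_id=1;
mysql> INSERT INTO payment (customer_id, amount) VALUES (1, 125);
mysql> UNLOCK TABLES;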
Another technique is to include the old values of the changed columns in the
WHERE clause of the UPDATE statement. If the record wasn't
updated, we give the client a message: ''Some of the data you have changed
has been changed by another user.'' Then we show the old row versus the new
row in a window, so the user can decide which version of the customer record
he should use.
This gives us something that is similar to column locking but is actually
even better because we only update some of the columns, using values that
are relative to their current values. This means that typical UPDATE
statements look something like these:
UPDATE tablename SET pay_back=pay_back+125;
UPDATE customer
SET
customer_date='current_date',
address='new address',
phone='new phone',
money_he_owes_us=money_he_owes_us-125
WHERE
customer_id=id AND address='old address' AND phone='old phone';
As you can see, this is very efficient and works even if another client
has changed the values in the pay_back or money_he_owes_us
columns.
Some applications use transactions with ROLLBACK and/or LOCK
TABLES for the purpose of managing unique identifiers for some tables.
This can be handled much more efficiently by using an
AUTO_INCREMENT column and either the SQL function
LAST_INSERT_ID() or the C API function mysql_insert_id().
See section 8.1.3.130 mysql_insert_id().
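A minimal sketch, assuming a hypothetical ticket table:
mysql> CREATE TABLE ticket (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, note CHAR(30));
mysql> INSERT INTO ticket (note) VALUES ('first');
mysql> SELECT LAST_INSERT_ID();
-> 1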
You can generally code around row-level locking. Some situations really
need it, but they are very few. InnoDB tables support row-level
locking. With MyISAM, you can use a flag column in the table and do
something like the following:
UPDATE tbl_name SET row_flag=1 WHERE id=ID;
MySQL returns 1 for the number of affected rows if the row was found and
row_flag wasn't already 1 in the original row.
You can think of it as though MySQL Server changed the preceding query to:
UPDATE tbl_name SET row_flag=1 WHERE id=ID AND row_flag <> 1;
Stored procedures are being implemented in our version 5.0 development tree. See section 2.3.4 Installing from the Development Source Tree.
This effort is based on SQL-99, which has a basic syntax similar (but not identical) to Oracle PL/SQL. In addition to this, we are implementing the SQL-99 framework to hook in external languages.
A stored procedure is a set of SQL commands that can be compiled and stored in the server. Once this has been done, clients don't need to keep re-issuing the entire query but can refer to the stored procedure. This provides better overall performance because the query has to be parsed only once, and less information needs to be sent between the server and the client. You can also raise the conceptual level by having libraries of functions in the server. However, stored procedures of course do increase the load on the database server system, as more of the work is done on the server side and less on the client (application) side.
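Purely as an illustration of the idea (the syntax shown is an SQL-99-style sketch; the MySQL versions covered by this manual do not accept it, and the procedure and table names are hypothetical):
mysql> CREATE PROCEDURE add_customer (IN p_name CHAR(40))
       INSERT INTO customer (name) VALUES (p_name);
mysql> CALL add_customer('Smith');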
Triggers will also be implemented. A trigger is effectively a type of stored procedure, one that is invoked when a particular event occurs. For example, you can install a stored procedure that is triggered each time a record is deleted from a transaction table and that stored procedure automatically deletes the corresponding customer from a customer table when all his transactions are deleted.
In MySQL Server 3.23.44 and up, InnoDB tables support checking of
foreign key constraints, including CASCADE, ON DELETE, and
ON UPDATE. See section 7.5.5.2 Foreign Key Constraints.
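A minimal sketch, assuming hypothetical parent and child InnoDB tables (note the explicit index on the referencing column):
mysql> CREATE TABLE parent (id INT NOT NULL PRIMARY KEY) TYPE=InnoDB;
mysql> CREATE TABLE child (id INT, parent_id INT,
       INDEX (parent_id),
       FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
       ) TYPE=InnoDB;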
For other table types, MySQL Server only parses the FOREIGN KEY
syntax in CREATE TABLE commands, but does not use/store this info.
Note that foreign keys in SQL are not used to join tables, but are used
mostly for checking referential integrity (foreign key constraints). If
you want to get results from multiple tables from a SELECT
statement, you do this by joining tables:
SELECT * FROM table1,table2 WHERE table1.id = table2.id;
See section 6.4.1.1 JOIN Syntax. See section 3.5.6 Using Foreign Keys.
FOREIGN KEY constraints don't need to be used if
the application inserts rows into MyISAM tables in the proper order.
For MyISAM tables, you can work around the lack of ON DELETE
by adding the appropriate DELETE statement to an application when you
delete records from a table that has a foreign key. In practice this is as
quick (in some cases quicker) and much more portable than using foreign keys.
In MySQL Server 4.0 you can use multi-table delete to delete rows from many
tables with one command. See section 6.4.6 DELETE Syntax.
The FOREIGN KEY syntax without ON DELETE ... is often used
by ODBC applications to produce automatic WHERE clauses.
In the near future we will extend the FOREIGN KEY implementation
so that the information is stored in the table specification file
and may be retrieved by mysqldump and ODBC. At a later stage we
will implement foreign key constraints for MyISAM tables as well.
Do keep in mind that foreign keys are often misused, which can cause severe problems. Even when used properly, it is not a magic solution for the referential integrity problem, although it can make things easier.
Some advantages of foreign key enforcement:
Disadvantages:
We plan to implement views in MySQL Server in version 5.1.
Historically, MySQL Server has been most used in applications and on web systems where the application writer has full control over database usage. Of course, usage has shifted over time, and so we find that an increasing number of users now regard views as an important aspect.
Views are useful for allowing users to access a set of relations as if it were a single table, and limiting their access to just that. Many DBMSs don't allow updates to a view; instead, you have to perform the updates on the individual tables.
Views can also be used to restrict access to rows (a subset of a particular table). One does not need views to restrict access to columns, as MySQL Server has a sophisticated privilege system. See section 4.2 General Security Issues and the MySQL Access Privilege System.
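As an illustration in standard SQL (not yet accepted by the MySQL Server versions covered by this manual; the customer table and its columns are hypothetical), such a restricted view might look like:
CREATE VIEW nordic_customers AS
    SELECT name, phone FROM customer WHERE country IN ('SE','NO','DK','FI');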
In designing our implementation of views, we aim toward (as fully as possible within the confines of SQL) compliance with ``Codd's Rule #6'' for relational database systems: all views that are theoretically updatable, should in practice also be updatable. This is a complex issue, and we are taking the time to make sure we get it right.
The implementation itself will be done in stages.
Unnamed views (derived tables, a subquery in the FROM
clause of a SELECT) are already implemented in version 4.1.
Note: If you are an enterprise level user with an urgent need for views, please contact sales@mysql.com to discuss sponsoring options. Targeted financing of this particular effort by one or more companies would allow us to allocate additional resources to it. One example of a feature sponsored in the past is replication.
Some other SQL databases use `--' to start comments.
MySQL Server has `#' as the start comment character. You can also use
the C comment style /* this is a comment */ with MySQL Server.
See section 6.1.6 Comment Syntax.
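For example, both comment styles just described are accepted:
mysql> SELECT 1+1;     # This comment continues to the end of the line
mysql> SELECT 1 /* this is an in-line comment */ + 1;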
MySQL Server Version 3.23.3 and above support the `--' comment style,
provided the comment is followed by a space. This is because this
comment style has caused many problems with automatically generated
SQL queries that have used something like the following code, where
we automatically insert the value of the payment for
!payment!:
UPDATE tbl_name SET credit=credit-!payment!
Think about what happens if the value of payment is negative.
Because 1--1 is legal in SQL, the consequences of allowing
comments to start with `--' are terrible.
Using our implementation of this method of commenting in MySQL Server
Version 3.23.3 and up, 1-- This is a comment is actually safe.
Another safe feature is that the mysql command-line client
removes all lines that start with `--'.
The following information is relevant only if you are running a MySQL version earlier than 3.23.3:
If you have a SQL program in a text file that contains `--' comments you should use:
shell> replace " --" " #" < text-file-with-funny-comments.sql \
| mysql database
instead of the usual:
shell> mysql database < text-file-with-funny-comments.sql
You can also edit the command file ``in place'' to change the `--' comments to `#' comments:
shell> replace " --" " #" -- text-file-with-funny-comments.sql
Change them back with this command:
shell> replace " #" " --" -- text-file-with-funny-comments.sql
As MySQL allows you to work with both transactional and non-transactional tables (which don't allow rollback), constraint handling is a bit different in MySQL than in other databases.
We have to handle the case where you have updated a lot of rows in a non-transactional table, which can't roll back on errors.
The basic philosophy is to try to give an error for anything that we can detect at compile time but try to recover from any errors we get at run time. We do this in most cases, but not yet for all. See section 1.9.4 New Features Planned For The Near Future.
The basic options MySQL has are to stop the statement in the middle or to do its best to recover from the problem and continue.
Here follows what happens with the different types of constraints.
Constraint PRIMARY KEY / UNIQUE / FOREIGN KEY
Normally you will get an error when you try to INSERT /
UPDATE a row that causes a primary key, unique key or foreign key
violation. If you are using a transactional storage engine, like
InnoDB, MySQL will automatically roll back the transaction. If you are
using a non-transactional storage engine, MySQL will stop at the row where
the error occurred and leave the rest of the rows unprocessed.
To make life easier, MySQL supports the IGNORE
directive for most commands that can cause a key violation (as in
INSERT IGNORE ...). In this case MySQL will ignore the key
violation and continue with processing the next row. You can get
information about what MySQL did with the mysql_info() API function
and, in MySQL 4.1 and later, with the SHOW WARNINGS
command. See section 8.1.3.122 mysql_info(). See section 4.5.7.9 SHOW WARNINGS | ERRORS.
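A minimal sketch, assuming a hypothetical person table with a primary key on id:
mysql> CREATE TABLE person (id INT NOT NULL PRIMARY KEY, name CHAR(30));
mysql> INSERT IGNORE INTO person VALUES (1,'Axel'),(1,'Bertil'),(2,'Cecilia');
With IGNORE, the duplicate (1,'Bertil') row is skipped and processing continues with the next row, so two rows end up in the table.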
Note that for the moment only InnoDB tables support foreign keys. This is scheduled to be fixed in the MySQL 5.0 source tree. See section 7.5.5.2 Foreign Key Constraints.
NOT NULL and DEFAULT values
To be able to support easy handling of non-transactional tables, all fields in MySQL have default values.
If you insert a 'wrong' value into a column, such as a NULL into a
NOT NULL column or a too-large numerical value into a numerical
column, MySQL will, instead of giving an error, set the column to
the 'best possible value'. For numerical values, this is 0, the smallest
possible value, or the largest possible value. For strings, this is
either the empty string or the longest string that fits in
the column.
This means that if you try to store NULL into a column that
doesn't take NULL values, MySQL Server will store 0 or ''
(empty string) in it instead. This last behaviour can, for single row
inserts, be changed with the -DDONT_USE_DEFAULT_FIELDS compile
option. See section 2.3.3 Typical configure Options.
This causes INSERT statements to generate an error unless you
explicitly specify values for all columns that require a non-NULL
value.
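A sketch of the default behaviour (that is, without the compile option above), using a hypothetical table:
mysql> CREATE TABLE t (i INT NOT NULL, c CHAR(5) NOT NULL);
mysql> INSERT INTO t (i) VALUES (1);   # c is not listed; its implicit default '' is used
mysql> SELECT i, c FROM t;
-> 1, ''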
The reason for the above rules is that we can't check these conditions before the query starts to execute. If we encounter a problem after updating a few rows, we can't just roll back, as the table type may not support this. Stopping is not a good option either, as in that case the update would be 'half done', which is probably the worst possible scenario. In this case it's better to 'do the best you can' and then continue as if nothing happened. In MySQL 5.0 we plan to improve this by providing warnings for automatic field conversions, plus an option to let you roll back statements that use only transactional tables if one makes a wrong field assignment.
The above means that one should not use MySQL to check field contents; instead, one should do this in the application.
ENUM and SET
In MySQL 4.x, ENUM is not a real constraint but a more efficient
way to store fields that can contain only a given set of values.
This is for the same reasons that NOT NULL is not honoured.
See section 1.8.5.2 Constraint NOT NULL and DEFAULT values.
If you insert a wrong value into an ENUM field, it will be set to
the reserved enum number 0, which will be displayed as an empty
string in string context. See section 6.2.3.3 The ENUM Type.
If you insert a wrong option into a SET field, the wrong value
will be ignored. See section 6.2.3.4 The SET Type.
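A small sketch of both behaviours, using a hypothetical table:
mysql> CREATE TABLE t2 (e ENUM('a','b'), s SET('x','y'));
mysql> INSERT INTO t2 VALUES ('c','x,z');
mysql> SELECT e, s FROM t2;
-> '', 'x'
Here 'c' is stored as the error value (displayed as an empty string) and the unknown 'z' member is dropped from the SET value.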
The following known errors/bugs are not fixed in MySQL 3.23 because fixing them would involve changing a lot of code, which could introduce other, even worse bugs. The bugs are also classified as 'not fatal' or 'bearable'.
You get a deadlock if you use LOCK TABLE on multiple tables
and then in the same connection do a DROP TABLE on one of
them while another thread is trying to lock the table. One can, however,
do a KILL on any of the involved threads to resolve this.
Fixed in 4.0.12.
SELECT MAX(key_column) FROM t1,t2,t3... where one of the tables is
empty doesn't return NULL, but instead returns the maximum value for the
column. Fixed in 4.0.11.
DELETE FROM heap_table without a WHERE doesn't work on a locked
HEAP table.
The following problems are known and fixing them is a high priority:
ANALYZE TABLE on a BDB table may in some cases make the table
unusable until one has restarted mysqld. When this happens you will
see errors like the following in the MySQL error file:
001207 22:07:56 bdb: log_flush: LSN past current end-of-log
MySQL Server doesn't yet support parentheses in the FROM part, but silently
ignores them. The reason for not giving an error is that many clients
that automatically generate queries add parentheses in the FROM
part even if they are not needed.
Using multiple RIGHT JOINs, or combining LEFT and
RIGHT joins in the same query, may not give a correct answer because
MySQL only generates NULL rows for the table preceding a LEFT or
before a RIGHT join. This will be fixed in 5.0 at the same time
we add support for parentheses in the FROM part.
You should not use ALTER TABLE on a BDB table on which you are
running multi-statement transactions until all those transactions complete.
(The transaction will probably be ignored.)
ANALYZE TABLE, OPTIMIZE TABLE, and REPAIR TABLE may
cause problems on tables for which you are using INSERT DELAYED.
LOCK TABLE ... and FLUSH TABLES ... don't
guarantee that there isn't a half-finished transaction in progress on the
table.
mysql client on the
database if you are not using the -A option or if you are using
rehash. This is especially notable when you have a big table
cache.
CREATE ... SELECT or
INSERT ... SELECT which
feeds zeros or NULLs into an auto_increment column.
DELETE if you are
deleting rows from a table which has foreign keys with ON DELETE
CASCADE properties.
REPLACE ... SELECT,
INSERT IGNORE ... SELECT if you have
duplicate key values in the inserted data.
ORDER BY
clause guaranteeing a deterministic order.
Indeed, for example for INSERT ... SELECT with no ORDER
BY, the SELECT may return rows in a different order
(which will result in a row having different ranks, hence getting a
different number in the auto_increment column),
depending on the choices made by the optimisers on the master and
slave. A query will be optimised differently on the master and slave only if:
OPTIMIZE TABLE was run on the master tables and not on
the slave tables (to fix this, since MySQL 4.1.1, OPTIMIZE, ANALYZE
and REPAIR are written to the binary log).
Buffer sizes (key_buffer_size etc.) are different on
the master and slave.
mysqlbinlog|mysql.
The easiest way to avoid this problem in all cases is to add an
ORDER BY clause to
such non-deterministic queries to ensure that the rows are always
stored/modified in the same order.
In future MySQL versions we will automatically add an ORDER BY
clause when needed.
The following problems are known and will be fixed in due time:
LIKE is not multi-byte character safe. Comparison is done
character by character.
If you use the RPAD() function, or any other string function that ends
up adding blanks to the right, in a query that has to use a temporary
table to be resolved, then all resulting strings will be RTRIM'ed. This
is an example of such a query:
SELECT RPAD(t1.field1, 50, ' ') AS f2, RPAD(t2.field2, 50, ' ') AS f1
FROM table1 AS t1 LEFT JOIN table2 AS t2 ON t1.record=t2.joinID
ORDER BY t2.record;
The final result of this bug is that the user will not be able to get blanks on
the right side of the resulting field.
This behaviour exists in all versions of MySQL.
The reason is that HEAP tables, which are used
first for temporary tables, are not capable of handling VARCHAR columns.
This behaviour will be fixed in one of the 4.1 series releases.
Because of the way table definition files are stored, you can't use character 255 (CHAR(255)) in table names, column names, or enums.
This is scheduled to be fixed in version 5.1 when we have new table
definition format files.
When using SET CHARACTER SET, one can't use translated
characters in database, table, and column names.
You can't use `_' or `%' with ESCAPE in LIKE
... ESCAPE.
If you have a DECIMAL column with a number stored in different
formats (+01.00, 1.00, 01.00), GROUP BY may regard each value
as a different value.
DELETE FROM merge_table used without a WHERE
will only clear the mapping for the table, not delete everything in the
mapped tables.
BLOB values can't ``reliably'' be used in GROUP BY or
ORDER BY or DISTINCT. Only the first max_sort_length
bytes (default 1024) are used when comparing BLOBs in these cases.
This can be changed with the -O max_sort_length option to
mysqld. A workaround for most cases is to use a substring:
SELECT DISTINCT LEFT(blob,2048) FROM tbl_name.
All calculations are done with BIGINT or DOUBLE (both are
normally 64 bits long). Which precision one
gets depends on the function. The general rule is that bit functions are done with BIGINT
precision, IF() and ELT() with BIGINT or DOUBLE
precision and the rest with DOUBLE precision. One should try to
avoid using unsigned long long values if they resolve to be bigger than
63 bits (9223372036854775807) for anything else than bit fields.
MySQL Server 4.0 has better BIGINT handling than 3.23.
All string columns, except BLOB and TEXT columns, automatically
have all trailing spaces removed when retrieved. For CHAR types this
is okay, and may be regarded as a feature according to SQL-92. The bug is
that in MySQL Server, VARCHAR columns are treated the same way.
You can have a maximum of 255 ENUM and SET columns in one table.
In MIN(), MAX(), and other aggregate functions, MySQL
currently compares ENUM and SET columns by their string
value rather than by the string's relative position in the set.
safe_mysqld redirects all messages from mysqld to the
mysqld log. One problem with this is that if you execute
mysqladmin refresh to close and reopen the log,
stdout and stderr are still redirected to the old log.
If you use --log extensively, you should edit safe_mysqld to
log to `'hostname'.err' instead of `'hostname'.log' so you can
easily reclaim the space for the old log by deleting the old one and
executing mysqladmin refresh.
In an UPDATE statement, columns are updated from left to right. If
you refer to an updated column, you will get the updated value instead of the
original value. For example:
mysql> UPDATE tbl_name SET KEY=KEY+1,KEY=KEY+1;
This will update KEY with 2 instead of with 1.
You can't use a temporary table more than once in the same query. For example, the following doesn't work:
mysql> SELECT * FROM temporary_table, temporary_table AS t2;
RENAME doesn't work with TEMPORARY tables or tables used in a
MERGE table.
MySQL Server handles DISTINCT differently depending on whether you are using
'hidden' columns in a join or not. In a join, hidden columns are
counted as part of the result (even if they are not shown) while in
normal queries hidden columns don't participate in the DISTINCT
comparison. We will probably change this in the future to never compare
the hidden columns when executing DISTINCT.
An example of this is:
SELECT DISTINCT mp3id FROM band_downloads
WHERE userid = 9 ORDER BY id DESC;
and
SELECT DISTINCT band_downloads.mp3id
FROM band_downloads,band_mp3
WHERE band_downloads.userid = 9
AND band_mp3.id = band_downloads.mp3id
ORDER BY band_downloads.id DESC;
In the second case you may in MySQL Server 3.23.x get two identical rows
in the result set (because the hidden id column may differ).
Note that this happens only for queries where you don't have the
ORDER BY columns in the result, something that you are not allowed
to do in SQL-92.
Because MySQL Server allows you to work with table types that don't support
rollback data, some things
behave a little differently in MySQL Server than in other SQL servers.
This is just to ensure that MySQL Server never needs to do a rollback
for a SQL command. This may be a little awkward at times as column
values must be checked in the application, but this will actually give
you a nice speed increase as it allows MySQL Server to do some
optimisations that otherwise would be very hard to do.
If you set a column to an incorrect value, MySQL Server will, instead of
doing a rollback, store the best possible value in the column:
If you try to store NULL into a column that doesn't take
NULL values, MySQL Server will store 0 or '' (empty
string) in it instead. (This behaviour can, however, be changed with the
-DDONT_USE_DEFAULT_FIELDS compile option.)
MySQL Server allows you to store certain wrong date values in DATE and
DATETIME columns (like 2000-02-31 or 2000-02-00). The idea is
that it's not the SQL server's job to validate dates. If MySQL can store a
date and retrieve exactly the same date, then MySQL will store the
date. If the date is totally wrong (outside the server's ability to store
it), then the special date value 0000-00-00 will be stored in the column.
If you set an ENUM column to an unsupported value, it will be set to
the error value empty string, with numeric value 0.
If you set a SET column to an unsupported value, the value will
be ignored.
If you use PROCEDURE on a query that returns an empty set,
in some cases the PROCEDURE will not transform the columns.
Creation of a table of type MERGE doesn't check whether the underlying
tables are of compatible types.
In some cases you can end up with NaN, -Inf, and Inf
values in a double column. Using these will cause problems when trying to export
and import data. As an intermediate solution, we should change NaN to
NULL (if possible) and -Inf and Inf to the
minimum and maximum possible double values, respectively.
Negative numbers given to LIMIT are treated as big positive numbers.
If you use ALTER TABLE to first add a UNIQUE index to a
table used in a MERGE table and then use ALTER TABLE to
add a normal index on the MERGE table, the key order will be
different for the tables if there was an old key that was not unique in the
table. This is because ALTER TABLE puts UNIQUE keys before
normal keys to be able to detect duplicate keys as early as possible.
The following are known bugs in earlier versions of MySQL:
You can get a hung thread if you do a DROP TABLE on a table that is
one among many tables that are locked with LOCK TABLES.
The following sequence can also hang:
LOCK a table with WRITE, then issue FLUSH TABLES.
An UPDATE that updated a key with
a WHERE on the same key may have failed because the key was used to
search for records and the same row may have been found multiple times:
UPDATE tbl_name SET KEY=KEY+1 WHERE KEY > 100;
A workaround is to use:
mysql> UPDATE tbl_name SET KEY=KEY+1 WHERE KEY+0 > 100;
This will work because MySQL Server will not use an index on expressions in the
WHERE clause.
For platform-specific bugs, see the sections about compiling and porting.
This section lists the features that we plan to implement in MySQL Server.
Everything in this list is approximately in the order it will be done. If you want to affect the priority order, please register a license or support us and tell us what you want to have done more quickly. See section 1.4 MySQL Support and Licensing.
The plan is that, in the future, we will support the full SQL-99 standard, but with a lot of useful extensions. The challenge is to do this without sacrificing the speed or compromising the code.
The features below are not yet implemented in MySQL 4.1, but are planned for implementation before MySQL 4.1 moves into its beta phase. For a list of what is already done in MySQL 4.1, see section 1.6.1 Features Available From MySQL 4.1.
Development of other things has already shifted to the 5.0 tree.
The following features are planned for inclusion into MySQL 5.0. Note that because we have many developers that are working on different projects, there will also be many additional features. There is also a small chance that some of these features will be added to MySQL 4.1. For a list of what is already done in MySQL 4.1, see section 1.6.1 Features Available From MySQL 4.1.
For those wishing to take a look at the bleeding edge of MySQL development, we have already made our BitKeeper repository for MySQL version 5.0 publicly available. See section 2.3.4 Installing from the Development Source Tree.
RTREE indexes for MyISAM tables. In 4.1, RTREE indexes are
used internally for geometrical data, but are not directly usable.
VARCHAR support (there is already support for this in
MyISAM).
SHOW COLUMNS FROM table_name (used by mysql client to allow
expansions of column names) should not open the table, only the
definition file. This will require less memory and be much faster.
DELETE on MyISAM tables to use the record cache.
To do this, we need to update the threads record cache when we update
the `.MYD' file.
In-memory (HEAP) tables:
When using SET CHARACTER SET, we should translate the whole query
at once and not only strings. This will enable users to use the translated
characters in database, table, and column names.
RENAME TABLE on a table used in an active
MERGE table possibly corrupting the table.
FOREIGN KEY support for all table types.
BIT type to take 1 bit (now BIT takes 1 char).
RENAME DATABASE. To make this safe for all storage engines,
it should work as follows:
RENAME command.
CONNECT BY PRIOR ... to search tree-like (hierarchical)
structures.
SUM(DISTINCT).
INSERT SQL_CONCURRENT and mysqld --concurrent-insert to do
a concurrent insert at the end of the file if the file is read-locked.
UPDATE statements. For example:
UPDATE TABLE foo SET @a=a+b,a=@a, b=@a+c.
GROUP BY, as in the following example:
SELECT id, @a:=COUNT(*), SUM(sum_col)/@a FROM table_name GROUP BY id.
IMAGE option to LOAD DATA INFILE to not update
TIMESTAMP and AUTO_INCREMENT fields.
LOAD DATA INFILE ... UPDATE syntax.
LOAD DATA INFILE ... REPLACE INTO now.
LOAD DATA INFILE understand syntax like:
LOAD DATA INFILE 'file_name.txt' INTO TABLE tbl_name
TEXT_FIELDS (text_field1, text_field2, text_field3)
SET table_field1=CONCAT(text_field1, text_field2),
table_field3=23
IGNORE text_field3
This can be used to skip over extra columns in the text file,
or update columns based on expressions of the read data.
SET type columns:
ADD_TO_SET(value,set)
REMOVE_FROM_SET(value,set)
If you abort mysql in the middle of a query, you should open
another connection and kill the old running query.
Alternatively, an attempt should be made to detect this in the server.
SHOW INFO FROM tbl_name for basic table information
should be implemented.
SELECT a FROM crash_me LEFT JOIN crash_me2 USING (a); in this
case a is assumed to come from the crash_me table.
DELETE and REPLACE options to the UPDATE statement
(this will delete rows when one gets a duplicate key error while updating).
DATETIME to store fractions of seconds.
DEFAULT values to columns. Give an error
when using an INSERT that doesn't contain a column that doesn't
have a DEFAULT.
ANY(), EVERY(), and SOME() group functions. In
standard SQL these work only on boolean columns, but we can extend these to
work on any columns/expressions by applying: value == 0 -> FALSE and
value <> 0 -> TRUE.
Make the result type of MAX(column) the same as the column type:
mysql> CREATE TABLE t1 (a DATE);
mysql> INSERT INTO t1 VALUES (NOW());
mysql> CREATE TABLE t2 SELECT MAX(a) FROM t1;
mysql> SHOW COLUMNS FROM t2;
INSERT ... SELECT to optionally use concurrent inserts.
pread()/pwrite() on Windows to enable
concurrent inserts.
SELECT MIN(column) ... GROUP BY.
long_query_time with a granularity
in microseconds.
myisampack code into the server, enabling a PACK or
COMPRESS command on the server.
INSERT/DELETE/UPDATE so that we
can gracefully recover if the index file gets full.
ALTER TABLE on a table that is symlinked to another
disk, create temporary tables on this disk.
DATE/DATETIME type that handles time zone information
properly so that dealing with dates in different time zones is easier.
MyISAM)
without threads.
Allow variables in LIMIT, like in LIMIT @a,@b.
mysql to a web browser.
LOCK DATABASES (with various options).
SHOW STATUS. Records reads and
updates. Selects on 1 table and selects with joins. Mean number of
tables in select. Number of ORDER BY and GROUP BY queries.
mysqladmin copy database new-database; requires COPY
command to be added to mysqld.
SHOW HOSTS for printing information about the hostname cache.
NULL for calculated columns.
Item_copy_string on numerical values to avoid
number->string->number conversion in case of:
SELECT COUNT(*)*(id+0) FROM table_name GROUP BY id
ALTER TABLE doesn't abort clients
that execute INSERT DELAYED.
UPDATE clause,
they contain the old values from before the update started.
get_changed_tables(timeout,table1,table2,...).
SET TIMESTAMP=#;.
MINUS, INTERSECT, and FULL OUTER JOIN.
(Currently UNION [in 4.0] and LEFT|RIGHT OUTER JOIN are supported.)
SQL_OPTION MAX_SELECT_TIME=# to put a time limit on a query.
LIMIT to allow retrieval of data from the end of a result set.
safe_mysqld: according to FSSTND (which
Debian tries to follow) PID files should go into `/var/run/<progname>.pid'
and log files into `/var/log'. It would be nice if you could put the
"DATADIR" in the first declaration of "pidfile" and "log", so the
placement of these files can be changed with a single statement.
zlib() for gzip-ed files to LOAD DATA INFILE.
BLOB columns (partly solved now).
AUTO_INCREMENT value when one sets a column to 0.
Use NULL instead.
JOIN with parentheses.
GET_LOCK. When doing this,
one must also handle the possible deadlocks this change will introduce.
Time is given according to amount of work, not real time.
Our users have successfully run their own benchmarks against a number
of Open Source and traditional database servers.
We are aware of tests against Oracle server, DB/2 server,
Microsoft SQL Server, and other commercial products.
Due to legal reasons we are restricted from publishing some of those
benchmarks in our reference manual.
This section includes a comparison with mSQL for historical
reasons and with PostgreSQL as it is also an Open Source
database. If you have benchmark results that we can publish, please
contact us at benchmarks@mysql.com.
For comparative lists of all supported functions and types as well
as measured operational limits of many different database systems,
see the crash-me web page at
http://www.mysql.com/information/crash-me.php.
How MySQL Compares to mSQL
mSQL should be quicker at:
INSERT operations into very simple tables with few columns and keys.
CREATE TABLE and DROP TABLE.
SELECT on something that isn't indexed. (A table scan is very
easy.)
However, MySQL Server is much faster than mSQL (and
most other SQL implementations) on the following:
SELECT operations.
VARCHAR columns.
SELECT with many expressions.
SELECT on large tables.
In mSQL, once one
connection is established, all others must wait until the first has
finished, regardless of whether the connection is running a query
that is short or long. When the first connection terminates, the
next can be served, while all the others wait again, etc.
mSQL can become pathologically slow if you change the order of
tables in a SELECT. In the benchmark suite, a time more than
15,000 times slower than MySQL Server was seen. This is due to mSQL's
lack of a join optimiser to order tables in the optimal order.
However, if you put the tables in exactly the right order in
mSQL2 and the WHERE is simple and uses index columns,
the join will be relatively fast.
See section 5.1.4 The MySQL Benchmark Suite.
ORDER BY and GROUP BY.
DISTINCT.
TEXT or BLOB columns.
GROUP BY and HAVING.
mSQL does not support GROUP BY at all.
MySQL Server supports a full GROUP BY with both HAVING and
the following functions: COUNT(), AVG(), MIN(),
MAX(), SUM(), and STD(). COUNT(*) is
optimised to return very quickly if the SELECT retrieves from
one table, no other columns are retrieved, and there is no
WHERE clause. MIN() and MAX() may take string
arguments.
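For illustration only (the table and column names here are hypothetical), a query of the following form combines GROUP BY, HAVING, and several of these functions:
mysql> SELECT customer_id, COUNT(*), SUM(amount) FROM orders GROUP BY customer_id HAVING SUM(amount) > 100;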
INSERT and UPDATE with calculations.
MySQL Server can do calculations in an INSERT or UPDATE.
For example:
mysql> UPDATE table_name SET x=x*10+y WHERE x<20;
SELECT with functions.
MySQL Server has many functions (too many to list here; see section 6.3 Functions for Use in SELECT and WHERE Clauses).
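As a small sketch (the table and column names are hypothetical), functions can be used both in the select list and in the WHERE clause:
mysql> SELECT CONCAT(first_name, ' ', last_name) FROM employees WHERE YEAR(hire_date) < 2000;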
MySQL Server has a MEDIUMINT column type that is 3 bytes long. If you have 100 million
records, saving even 1 byte per record is very important.
mSQL2 has a more limited set of column types, so it is
more difficult to get small tables.
We have too little experience with mSQL stability, so we cannot say
anything about that.
mSQL, and is also less expensive
than mSQL. Whichever product you choose to use, remember to
at least consider paying for a license or e-mail support.
mSQL with
some added features.
GPL and commercial).
mSQL has a JDBC driver, but we have too little
experience with it to compare.
Because features such as GROUP BY and so on are still not implemented in mSQL, it
has a lot of catching up to do. To get some perspective on this, you
can view the mSQL `HISTORY' file for the last year and
compare it with the News section of the MySQL Reference Manual
(see section D MySQL Change History). It should be pretty obvious which one has developed
most rapidly.
Both mSQL and MySQL Server have many interesting third-party
tools. Because it is very easy to port upward (from mSQL to
MySQL Server), almost all the interesting applications that are available for
mSQL are also available for MySQL Server.
MySQL Server comes with a simple msql2mysql program that fixes
differences in spelling between mSQL and MySQL Server for the
most-used C API functions.
For example, it changes instances of msqlConnect() to
mysql_connect(). Converting a client program from mSQL to
MySQL Server usually requires only minor effort.
How to Convert mSQL Tools for MySQL
According to our experience, it doesn't take long to convert tools
such as msql-tcl and msqljava that use the
mSQL C API so that they work with the MySQL C API.
The conversion procedure is:
Run msql2mysql on the source. This requires
the replace program, which is distributed with MySQL Server.
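As a rough sketch (the exact invocation and the source file name are illustrative only), converting a client source file might look like this:
shell> msql2mysql client.c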
Differences between the mSQL C API and the MySQL C API are:
MySQL uses a MYSQL structure as a connection type (mSQL
uses an int).
mysql_connect() takes a pointer to a MYSQL structure as a
parameter. It is easy to define one globally or to use malloc()
to get one. mysql_connect() also takes two parameters for
specifying the user and password. You may set these to
NULL, NULL for default use.
mysql_error() takes the MYSQL structure as a parameter.
Just add the parameter to your old msql_error() code if you are
porting old code.
mSQL returns only a text error message.
mSQL and MySQL Client/Server Communications Protocols Differ
There are enough differences that it is impossible (or at least not easy) to support both.
The most significant ways in which the MySQL protocol differs
from the mSQL protocol are listed here:
mSQL 2.0 SQL Syntax Differs from MySQL
Column types
MySQL Server
Has the following additional column types (see section 6.5.3 CREATE TABLE Syntax):
ENUM type for one of a set of strings.
SET type for many of a set of strings.
BIGINT type for 64-bit integers.
UNSIGNED option for integer and floating-point columns.
ZEROFILL option for integer columns.
AUTO_INCREMENT option for integer columns that are a
PRIMARY KEY.
See section 8.1.3.130 mysql_insert_id().
DEFAULT value for all columns.
mSQL2
mSQL column types correspond to the MySQL types shown in the following table:
| mSQL type | Corresponding MySQL type |
| CHAR(len) | CHAR(len) |
| TEXT(len) | TEXT(len). len is the maximal length. And LIKE works. |
| INT | INT. With many more options. |
| REAL | REAL. Or FLOAT. Both 4- and 8-byte versions are available. |
| UINT | INT UNSIGNED |
| DATE | DATE. Uses SQL-99 format rather than mSQL's own format. |
| TIME | TIME |
| MONEY | DECIMAL(12,2). A fixed-point value with two decimals. |
Index Creation
MySQL Server
Indexes may be specified at table creation time with the CREATE TABLE
statement.
mSQL
Indexes must be created after the table has been created, with separate CREATE INDEX statements.
To Insert a Unique Identifier into a Table
MySQL Server
Use AUTO_INCREMENT as a column type specifier.
See section 8.1.3.130 mysql_insert_id().
mSQL
Create a SEQUENCE on a table and select the _seq column.
To Obtain a Unique Identifier for a Row
MySQL Server
PRIMARY KEY or UNIQUE key to the table and use this.
New in Version 3.23.11: If the PRIMARY or UNIQUE key consists of only one
column and this is of type integer, one can also refer to it as
_rowid.
mSQL
Use the _rowid column. Observe that _rowid may change over time
depending on many factors.
To Get the Time a Column Was Last Modified
MySQL Server
Add a TIMESTAMP column to the table. This column is automatically set
to the current date and time for INSERT or UPDATE statements if
you don't give the column a value or if you give it a NULL value.
mSQL
Use the _timestamp column.
NULL Value Comparisons
MySQL Server
A comparison with NULL is always UNKNOWN.
mSQL
mSQL, NULL = NULL is TRUE. You
must change =NULL to IS NULL and <>NULL to
IS NOT NULL when porting old code from mSQL to MySQL Server.
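For example (the column name is hypothetical), an old mSQL-style condition such as a=NULL would be rewritten for MySQL Server as:
mysql> SELECT * FROM table_name WHERE a IS NULL;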
String Comparisons
MySQL Server
Columns may be declared with the BINARY attribute, which causes comparisons to be done according to the
ASCII order used on the MySQL server host.
mSQL
Case-insensitive Searching
MySQL Server
LIKE is a case-insensitive or case-sensitive operator, depending on
the columns involved. If possible, MySQL uses indexes if the
LIKE argument doesn't start with a wildcard character.
mSQL
Use CLIKE.
Handling of Trailing Spaces
MySQL Server
Trailing spaces are stripped from CHAR and VARCHAR
columns. Use a TEXT column if this behaviour is not desired.
mSQL
WHERE Clauses
MySQL Server
Operator precedence is applied correctly (AND is evaluated
before OR). To get mSQL behaviour in MySQL Server, use
parentheses (as shown in an example later in this section).
mSQL
Suppose you have the following mSQL query:
mysql> SELECT * FROM table WHERE a=1 AND b=2 OR a=3 AND b=4;
To make MySQL Server evaluate this the way that
mSQL would,
you must add parentheses:
mysql> SELECT * FROM table WHERE (a=1 AND (b=2 OR (a=3 AND (b=4))));
Access Control
MySQL Server
mSQL
PostgreSQL
When reading the following, please note that both products are continually evolving. MySQL AB's and PostgreSQL's developers are both working on making our respective databases as good as possible. Both products are thus a serious alternative to any commercial database.
The following comparison is made by us at MySQL AB. We have tried to be as accurate and fair as possible, but although we know MySQL Server thoroughly, we don't have a full knowledge of all PostgreSQL features, so we may have got some things wrong. We will, however, correct these when they come to our attention.
We would first like to note that PostgreSQL and MySQL Server are both widely used
products, but with different design goals, even if we are both striving
toward SQL standard compliance. This means that for some applications MySQL Server
is more suited, while for others PostgreSQL is more suited. When choosing
which database to use, you should first check if the database's feature set
satisfies your application. If you need raw speed, MySQL Server is probably your
best choice. If you need some of the extra features that only PostgreSQL
can offer, you should use PostgreSQL.
When adding things to MySQL Server, we take pride in producing an optimal, definitive solution. The code should be so good that we shouldn't have any need to change it in the foreseeable future. We also do not like to sacrifice speed for features, but instead do our utmost to find a solution that will give maximal throughput. This means that development will take a little longer, but the end result will be well worth it. This kind of development is only possible because all server code is checked by one of a few (currently two) persons before it is included in the MySQL server.
We at MySQL AB believe in frequent releases to be able to push out new features quickly to our users. Because of this we do a new small release about every three weeks, and a major branch every year. All releases are thoroughly tested with our testing tools on a lot of different platforms.
PostgreSQL is based on a kernel with lots of contributors. In this setup it makes sense to prioritise adding a lot of new features, instead of implementing them optimally, because one can always optimise things later if there arises a need for this.
Another big difference between MySQL Server and PostgreSQL is that nearly all of the code in the MySQL server is coded by developers that are employed by MySQL AB and are still working on the server code. The exceptions are the transaction engines and the regexp library.
This is in sharp contrast to the PostgreSQL code, the majority of which is coded by a big group of people with different backgrounds. It was only recently that the PostgreSQL developers announced that their current developer group had finally had time to take a look at all the code in the current PostgreSQL release.
Both of the aforementioned development methods have their own merits and drawbacks. We here at MySQL AB think, of course, that our model is better because our model gives better code consistency, more optimal and reusable code, and in our opinion, fewer bugs. Because we are the authors of the MySQL server code, we are better able to coordinate new features and releases.
On the crash-me page
(http://www.mysql.com/information/crash-me.php)
you can find a list of those database constructs and limits that
one can detect automatically with a program. Note, however, that a lot of
the numerical limits may be changed with startup options for their respective
databases. This web page is, however, extremely useful when you want to
ensure that your applications work with many different databases or
when you want to convert your application from one database to another.
MySQL Server offers the following advantages over PostgreSQL:
MySQL Server is generally much faster than PostgreSQL. MySQL
4.0.1 also has a query cache that can boost the query speed of
mostly-read-only sites many times over.
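As an illustration only (the cache size shown is arbitrary; the query_cache_size variable requires MySQL 4.0.1 or later), the query cache can be enabled through the server's option file:
[mysqld]
query_cache_size=16M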
MySQL Server runs natively on Windows, while PostgreSQL runs under Cygwin emulation. We have
heard that PostgreSQL is not yet that stable on Windows, but we haven't
been able to verify this ourselves.
PostgreSQL requires you to run VACUUM
once in a while to reclaim space from UPDATE and DELETE
commands and to perform the statistics analyses that are critical to getting
good performance with PostgreSQL. VACUUM is also needed after
adding a lot of new rows to a table. On a busy system with lots of changes,
VACUUM must be run very frequently, in the worst cases even
many times a day. During the VACUUM run, which may take hours
if the database is big, the database is, from a production standpoint,
practically dead. Please note: in PostgreSQL version 7.2, basic vacuuming
no longer locks tables, thus allowing normal user access during the vacuum.
A new VACUUM FULL command does old-style vacuum by locking the table
and shrinking the on-disk copy of the table.
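For reference, vacuuming in PostgreSQL is done with ordinary SQL statements, for example (the table name is hypothetical):
VACUUM;
VACUUM FULL table_name;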
MySQL Server has a test system that includes crash-me
(http://www.mysql.com/information/crash-me.php) as well
as a benchmark suite. The test system is actively updated with code to
test each new feature and almost all reproducible bugs that have come to
our attention. We test MySQL Server with these on a lot of platforms before
every release. These tests are more sophisticated than anything we have
seen from PostgreSQL, and they ensure that the MySQL Server is kept to a high
standard.
PostgreSQL.
ALTER TABLE.
Tables may be in-memory HEAP
tables or disk-based MyISAM tables. See section 7 MySQL Table Types.
MySQL Server supports two transactional table handlers, InnoDB and BerkeleyDB. Because every
transaction engine performs differently under different conditions, this
gives the application writer more options to find an optimal solution for
his or her setup, if need be per individual table. See section 7 MySQL Table Types.
MERGE tables give you a unique way to instantly make a view over
a set of identical tables and use these as one. This is perfect for
systems where you have log files that you order, for example, by month.
See section 7.2 MERGE Tables.
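A minimal sketch (table and column names are hypothetical, and the underlying tables must be identical MyISAM tables; the pre-4.1 TYPE= syntax is shown):
mysql> CREATE TABLE log_total (ts DATETIME, msg CHAR(20)) TYPE=MERGE UNION=(log_2003_01,log_2003_02);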
Compressed read-only tables can be created with myisampack, the MySQL Compressed Read-only Table Generator.
Instead of offering only INSERT,
SELECT, and UPDATE/DELETE grants per user on a database or
a table, MySQL Server allows you to define a full set of different
privileges on the database, table, and column level. MySQL Server also
allows you to specify the privilege on host and user combinations.
See section 4.3.1 GRANT and REVOKE Syntax.
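For example (the user, host, database, table, and column names are hypothetical), a column-level privilege can be combined with a table-level one in a single statement:
mysql> GRANT SELECT (customer_id), INSERT ON shop.orders TO 'someuser'@'somehost';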
All table types (except InnoDB) are implemented as files
(one table per file), which makes it really easy to back up, move, delete,
and even symlink databases and tables, even when the server is down.
Tools are included to check and repair MyISAM tables (the most common
MySQL table type). A repair tool is only needed when a physical corruption
of a datafile happens, usually from a hardware failure. It allows a
majority of the data to be recovered.
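As a sketch (the directory layout and names are illustrative only), a MyISAM table consists of `.frm', `.MYD', and `.MYI' files that can simply be copied while the server is down:
shell> cp /var/lib/mysql/db_name/tbl_name.* /backup/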
Drawbacks with MySQL Server compared to PostgreSQL:
Table locking, as used by the non-transactional MyISAM tables, is
in many cases faster than page locks, row locks, or versioning. The
drawback, however, is that if one doesn't take into account how table
locks work, a single long-running query can block a table for updates
for a long time. This can usually be avoided when designing the
application. If not, one can always switch the trouble table to use one
of the transactional table types. See section 5.3.2 Table Locking Issues.
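For example (the table name is hypothetical; the TYPE= syntax applies to MySQL 3.23/4.0), a table can be switched to a transactional handler with:
mysql> ALTER TABLE tbl_name TYPE=InnoDB;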
UPDATE and in MySQL Server 4.1 with subqueries.
In MySQL Server 4.0 one can use multi-table deletes to delete from many
tables at the same time. See section 6.4.6 DELETE Syntax.
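A sketch of the 4.0 multi-table delete syntax (table and column names are hypothetical):
mysql> DELETE t1, t2 FROM t1, t2 WHERE t1.id = t2.t1_id AND t1.expired = 1;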
PostgreSQL currently offers the following advantages over MySQL Server:
Note that because we know the MySQL road map, we have included in the following table the version in which MySQL Server should support each feature. Unfortunately we couldn't do the same for PostgreSQL, because we don't know the PostgreSQL roadmap.
| Feature | MySQL version |
| Subqueries | 4.1 |
| Foreign keys | 5.1 (3.23 with InnoDB) |
| Views | 5.1 |
| Stored procedures | 5.0 |
| Triggers | 5.1 |
| Unions | 4.0 |
| Full outer join | 5.1 |
| Constraints | 5.1 |
| Cursors | 5.0 |
| R-trees | 4.1 (for MyISAM tables) |
| Inherited tables | Not planned |
| Extensible type system | Not planned |
Other reasons someone may consider using PostgreSQL:
Drawbacks with PostgreSQL compared to MySQL Server:
The need to run VACUUM makes PostgreSQL hard to use in a 24/7 environment.
Slower INSERT, DELETE, and UPDATE.
For a complete list of drawbacks, you should also examine the first table in this section.
The only Open Source benchmark that we know of that can be used to
benchmark MySQL Server and PostgreSQL (and other databases) is our own. It can
be found at http://www.mysql.com/information/benchmarks.html.
We have many times asked the PostgreSQL developers and some PostgreSQL users to help us extend this benchmark to make it the definitive benchmark for databases, but unfortunately we haven't gotten any feedback for this.
Because of this, we, the MySQL developers, have spent many hours getting maximum performance from PostgreSQL for the benchmarks, but because we don't know PostgreSQL intimately, we are sure that there are things we have missed. We have documented on the benchmark page exactly how we ran the benchmark, so that it should be easy for anyone to repeat and verify our results.
The benchmarks are usually run with and without the --fast option.
When run with --fast, we try to use every trick the server supports
to get the code to execute as fast as possible. The idea is that the
normal run shows how the server would work in a default setup, while
the --fast run shows how the server would do if the application
developer were to use extensions in the server to make his application run
faster.
When running with PostgreSQL and --fast we do a VACUUM
after every major table UPDATE and DROP TABLE to make the
database in perfect shape for the following SELECTs. The time for
VACUUM is measured separately.
When running with PostgreSQL 7.1.1, however, we could not run with
--fast, because during the INSERT test the postmaster (the
PostgreSQL daemon) died and the database was so corrupted that it was
impossible to restart postmaster. After this happened twice, we decided
to postpone the --fast test until the next PostgreSQL release. The
details about the machine we ran the benchmark on can be found on the
benchmark page.
Before going to the other benchmarks we know of, we would like to give some background on benchmarks.
It's very easy to write a test that shows any database to be the best database in the world, by just restricting the test to something the database is very good at and not testing anything that the database is not good at. If one, after doing this, summarises the result as a single figure, things are even easier.
This would be like us measuring the speed of MySQL Server compared to PostgreSQL by looking at the summary time of the MySQL benchmarks on our web page. Based on this MySQL Server would be more than 40 times faster than PostgreSQL, something that is, of course, not true. We could make things even worse by just taking the test where PostgreSQL performs worst and claim that MySQL Server is more than 2000 times faster than PostgreSQL.
The case is that MySQL does a lot of optimisations that PostgreSQL doesn't do. This is, of course, also true the other way around. An SQL optimiser is a very complex thing, and a company could spend years just making the optimiser faster and faster.
When looking at the benchmark results you should look for things that you do in your application and just use these results to decide which database would be best suited for your application. The benchmark results also show things a particular database is not good at and should give you a notion about things to avoid and what you may have to do in other ways.
We know of two benchmark tests that claim that PostgreSQL performs better than MySQL Server. These are both multi-user tests, a test that we here at MySQL AB haven't had time to write and include in the benchmark suite, mainly because it's a big task to do this in a manner that is fair to all databases.
One is the benchmark paid for by Great Bridge, the company that for 16 months attempted to build a business based on PostgreSQL but now has ceased operations. This is probably the worst benchmark we have ever seen anyone conduct. This was not only tuned to only test what PostgreSQL is absolutely best at, but it was also totally unfair to every other database involved in the test.
Note: We know that even some of the main PostgreSQL developers did not like the way Great Bridge conducted the benchmark, so we don't blame the PostgreSQL team for the way the benchmark was done.
This benchmark has been condemned in a lot of postings and newsgroups, so here we will just briefly repeat some things that were wrong with it.
It was not possible for an Open Source company like us to verify the benchmarks,
or even to check how the benchmarks were really done. The tool is not even
a true benchmark tool, but an application/setup testing tool. To refer
to this as a ``standard'' benchmark tool is to stretch the truth a long way.
They optimised PostgreSQL for the test (by running VACUUM before the test) and tuned the startup for the tests,
something they hadn't done for any of the other databases involved. They
say ``This process optimises indexes and frees up disk space a bit. The
optimised indexes boost performance by some margin.'' Our benchmarks
clearly indicate that the difference in running a lot of selects on a
database with and without VACUUM can easily differ by a factor
of 10.
PostgreSQL performs well on SELECTs and JOINs (especially
after a VACUUM), but doesn't perform as well on INSERTs or
UPDATEs. The benchmarks seem to indicate that only SELECTs
were done (or very few updates). This could easily explain the good results
for PostgreSQL in this test. The bad results for MySQL will be obvious a
bit down in this document.
Tim Perdue, a long-time PostgreSQL fan and a reluctant MySQL user, published a comparison on PHPbuilder (http://www.phpbuilder.com/columns/tim20001112.php3).
When we became aware of the comparison, we phoned Tim Perdue about this because there were a lot of strange things in his results. For example, he claimed that MySQL Server had a problem with five users in his tests, when we know that there are users with similar machines running MySQL Server with 2000 simultaneous connections doing 400 queries per second. (In this case the limit was the web bandwidth, not the database.)
It sounded like he was using a Linux kernel that had problems with many threads, such as the kernels before 2.4, which had trouble handling many threads on multi-CPU machines. This manual describes the fix for this, and Tim should be aware of this problem.
The other possible problem could have been an old glibc library and that Tim didn't use a MySQL binary from our site, which is linked with a corrected glibc library, but had compiled a version of his own. In any of these cases, the symptom would have been exactly what Tim had measured.
We asked Tim whether we could get access to his data so that we could repeat the benchmark, and whether he could check the MySQL version on the machine to find out what was wrong. He promised to come back to us about this, but has not done so yet.
Because of this we can't put any trust in this benchmark either.
Over time things also change and the preceding benchmarks are no longer very relevant. MySQL Server now has a couple of different storage engines with different speed/concurrency tradeoffs. See section 7 MySQL Table Types. It would be interesting to see how the above tests would run with the different transactional table types in MySQL Server. PostgreSQL has, of course, also got new features since the test was made. As these tests are not publicly available there is no way for us to know how the database would perform in the same tests today.
Conclusion:
The only benchmarks that exist today that anyone can download and run
against MySQL Server and PostgreSQL are the MySQL benchmarks.
We here at MySQL AB
believe that Open Source databases should be tested with Open Source tools.
This is the only way to ensure that no one runs tests that nobody can
reproduce and then uses them to claim that one database is better than another.
Without knowing all the facts, it's impossible to answer the claims of the
tester.
The thing we find strange is that every test we have seen about PostgreSQL, that is impossible to reproduce, claims that PostgreSQL is better in most cases while our tests, which anyone can reproduce, clearly show otherwise. With this we don't want to say that PostgreSQL isn't good at many things (it is!) or that it isn't faster than MySQL Server under certain conditions. We would just like to see a fair test where PostgreSQL performs very well, so that we could get some friendly competition going.
For more information about our benchmark suite, see section 5.1.4 The MySQL Benchmark Suite.
We are working on an even better benchmark suite, including multi-user tests, as well as better documentation of what the individual tests really do and how to add more tests to the suite.
This chapter describes how to obtain and install MySQL:
The recommended way to install MySQL on Linux is by using the RPM
packages. The MySQL RPMs are currently built on a SuSE Linux 7.3
system but should work on most versions of Linux that support rpm
and use glibc.
If you have problems with an RPM file, for example, if you receive the error
``Sorry, the host 'xxxx' could not be looked up,'' see
section 2.6.1.1 Linux Notes for Binary Distributions.
In most cases, you only need to install the MySQL-server and
MySQL-client packages to get a functional MySQL installation. The
other packages are not required for a standard installation.
If you get a dependency failure when trying to install the MySQL 4.0
packages (e.g. ``error: removing these packages would break dependencies:
libmysqlclient.so.10 is needed by ...''), you should also install the package
MySQL-shared-compat, which includes the shared libraries for both
MySQL 4.0 (libmysqlclient.so.12) and MySQL 3.23 (libmysqlclient.so.10)
for backwards compatibility.
Many Linux distributions still ship with MySQL 3.23 and they usually link applications dynamically to save disk space. If these shared libraries are in a separate package (e.g. MySQL-shared), it is sufficient to simply leave this package installed and just upgrade the MySQL server and client packages (which are statically linked and do not depend on the shared libraries). For distributions that include the shared libraries in the same package as the MySQL server (e.g. Red Hat Linux), you could either install our 3.23 MySQL-shared RPM, or use the MySQL-shared-compat package instead.
The following RPM packages are available:
MySQL-server-VERSION.i386.rpm
The MySQL server. You will need this unless you only want to
connect to a MySQL server running on another machine. Please note
that this package was called MySQL-VERSION.i386.rpm before
MySQL 4.0.10.
MySQL-client-VERSION.i386.rpm
The standard MySQL client programs. You probably always want to
install this package.
MySQL-bench-VERSION.i386.rpm
Tests and benchmarks. Requires Perl and the DBD-mysql module.
MySQL-devel-VERSION.i386.rpm
Libraries and include files needed if you want to compile other
MySQL clients, such as the Perl modules.
MySQL-shared-VERSION.i386.rpm
This package contains the shared libraries (libmysqlclient.so*)
which certain languages and applications need to dynamically load and
use MySQL.
MySQL-shared-compat-VERSION.i386.rpm
This package includes the shared libraries for both MySQL 3.23 and
MySQL 4.0. Install this package instead of MySQL-shared, if you
have applications installed that are dynamically linked against MySQL
3.23 but you want to upgrade to MySQL 4.0 without breaking the library
dependencies. This package is available since MySQL 4.0.13.
MySQL-embedded-VERSION.i386.rpm
The embedded MySQL server library (from MySQL 4.0).
MySQL-VERSION.src.rpm
This contains the source code for all of the previous packages. It can also
be used to rebuild the RPMs on other architectures (for example, Alpha or SPARC).
To see all files in an RPM package, run:
shell> rpm -qpl MySQL-VERSION.i386.rpm
To perform a standard minimal installation, run:
shell> rpm -i MySQL-server-VERSION.i386.rpm MySQL-client-VERSION.i386.rpm
To install just the client package, run:
shell> rpm -i MySQL-client-VERSION.i386.rpm
The RPM places data in `/var/lib/mysql'. The RPM also creates the appropriate entries in `/etc/init.d/' to start the server automatically at boot time. (This means that if you have performed a previous installation, you may want to make a copy of your previously installed MySQL startup file if you made any changes to it, so you don't lose your changes.)
If you want to install the MySQL RPM on older Linux distributions that do not support init scripts in `/etc/init.d' (directly or via a symlink), you should create a symbolic link pointing to the old location before installing the RPM:
shell> cd /etc ; ln -s rc.d/init.d .
However, all current major Linux distributions should already support this new directory layout as it is required for LSB (Linux Standard Base) compliance.
After installing the RPM file(s), the mysqld daemon should be up and
running and you should now be able to start using MySQL.
See section 2.4 Post-installation Setup and Testing.
If something goes wrong, you can find more information in the binary installation chapter. See section 2.2.11 Installing a MySQL Binary Distribution.
The MySQL server for Windows is available in two distribution types:
Generally speaking, you should use the binary distribution.
You will need the following:
MAX_ROWS and
AVG_ROW_LENGTH when you create the table. See section 6.5.3 CREATE TABLE Syntax.
A ZIP program to unpack the distribution file.
If you want to connect to the MySQL server via ODBC, you
will also need the MyODBC driver. See section 8.2 MySQL ODBC Support.
C:\> NET STOP MySQL
Otherwise, use:
C:\mysql\bin> mysqladmin -u root shutdown
C:\mysql\bin> mysqld --remove
Browse button to specify your
preferred directory.
Starting with MySQL 3.23.38, the Windows distribution includes both the normal and the MySQL-Max server binaries. Here is a list of the different MySQL servers you can use:
| Binary | Description |
| mysqld | Compiled with full debugging and automatic memory allocation checking, symbolic links, InnoDB, and BDB tables. |
| mysqld-opt | Optimised binary with no support for transactional tables in version 3.23. For version 4.0, InnoDB is enabled. |
| mysqld-nt | Optimised binary for NT/2000/XP with support for named pipes. You can run this version on Windows 9x/Me, but in this case no named pipes are created and you must have TCP/IP installed. |
| mysqld-max | Optimised binary with support for symbolic links, InnoDB and BDB tables. |
| mysqld-max-nt | Like mysqld-max, but compiled with support for named pipes. |
Starting from 3.23.50, named pipes are only enabled if one starts mysqld with
--enable-named-pipe.
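A minimal sketch (assuming the mysqld-nt binary and the default installation path; the exact option combination is illustrative only):
C:\mysql\bin> mysqld-nt --standalone --enable-named-pipe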
All of the preceding binaries are optimised for the Pentium Pro processor but should work on any Intel processor >= i386.
You will need to use an option file to specify your MySQL configuration under the following circumstances:
Normally you can use the WinMySQLAdmin tool to edit the
option file my.ini. In this case you don't have to worry
about the following section.
There are two option files with the same function: `my.cnf' and
`my.ini'. However, to avoid confusion, it's best if you use only
one of them. Both files are plain text. The `my.cnf' file, if used,
should be created in the root directory of the C drive. The `my.ini'
file, if used, should be created in the Windows system directory. (This
directory is typically something like `C:\WINDOWS' or `C:\WINNT'.
You can determine its exact location from the value of the windir
environment variable.) MySQL looks first for the my.ini file,
then for the `my.cnf' file.
If your PC uses a boot loader where the C drive isn't the boot drive,
your only option is to use the `my.ini' file. Also note that
if you use the WinMySQLAdmin tool, it uses only the `my.ini'
file. The `\mysql\bin' directory contains a help file with
instructions for using this tool.
Using notepad.exe, create the option file and edit the
[mysqld] section to specify values for the basedir and
datadir parameters:
[mysqld]
# set basedir to installation path, e.g., c:/mysql
basedir=the_install_path
# set datadir to location of data directory,
# e.g., c:/mysql/data or d:/mydata/data
datadir=the_data_path
Note that Windows pathnames should be specified in option files using forward slashes rather than backslashes. If you do use backslashes, you must double them.
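For example, the following two basedir settings are equivalent in an option file (the path is just an illustration):
basedir=c:/mysql
basedir=c:\\mysql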
If you would like to use a data directory different from the default of `c:\mysql\data', you must copy the entire contents of the `c:\mysql\data' directory to the new location.
If you want to use the InnoDB transactional tables in
MySQL version 3.23, you
need to manually create two new directories to hold the InnoDB
data and log files, e.g., `c:\ibdata' and `c:\iblogs'.
You will also need to add some extra lines to the option
file. See section 7.5.3 InnoDB Startup Options.
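A minimal sketch of such lines (the directories and the size are illustrative only and match the example startup messages below; see the InnoDB startup options section for the authoritative list):
[mysqld]
innodb_data_home_dir = c:\ibdata
innodb_data_file_path = ibdata1:200M
innodb_log_group_home_dir = c:\iblogs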
Now you are ready to test starting the server.
Testing from a DOS command prompt is the best thing to do because the server displays status messages that appear in the DOS window. If something is wrong with your configuration, these messages will make it easier for you to identify and fix any problems.
Make sure you are in the directory where the server is located, then enter this command:
C:\mysql\bin> mysqld --standalone
You should see the following messages as the server starts up:
InnoDB: The first specified datafile c:\ibdata\ibdata1 did not exist:
InnoDB: a new database to be created!
InnoDB: Setting file c:\ibdata\ibdata1 size to 209715200
InnoDB: Database physically writes the file full: wait...
InnoDB: Log file c:\iblogs\ib_logfile0 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile0 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile1 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile1 size to 31457280
InnoDB: Log file c:\iblogs\ib_logfile2 did not exist: new to be created
InnoDB: Setting log file c:\iblogs\ib_logfile2 size to 31457280
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: creating foreign key constraint system tables
InnoDB: foreign key constraint system tables created
011024 10:58:25 InnoDB: Started
For further information about running MySQL on Windows, see section 2.6.2 Windows Notes.
Beginning with MySQL 4.0.11, you can install MySQL on Mac OS X 10.2
("Jaguar") using a Mac OS X PKG binary package instead of the
binary tarball distribution. Please note that older versions of Mac OS X
(e.g. 10.1.x) are not supported by this package.
The package is located inside a disk image (.dmg) file that you
first need to mount by double-clicking its icon in the Finder. This
mounts the image and displays its contents.
NOTE: Before proceeding with the installation, please make sure that no other MySQL server is running.
Please shut down all running MySQL instances before continuing by either
using the MySQL Manager Application (on Mac OS X Server) or via
mysqladmin shutdown on the command line.
To actually install the MySQL PKG, double click on the package icon. This will launch the Mac OS Package Installer, which will guide you through the installation of MySQL.
The Mac OS X PKG of MySQL will install itself into
`/usr/local/mysql-<version>' and will also install a symbolic link
`/usr/local/mysql', pointing to the new location. If a directory named
`/usr/local/mysql' already exists, it will be renamed to
`/usr/local/mysql.bak' first. Additionally, it will install the mysql
grant tables by executing mysql_install_db after the installation.
The installation layout is similar to that of the binary distribution; all MySQL binaries are located in the directory `/usr/local/mysql/bin'. The MySQL socket will be placed in `/etc/mysql.sock' by default. See section 2.2.7 Installation Layouts.
It requires a user account named mysql (which should exist by default
on Mac OS X 10.2 and up).
If you are running Mac OS X Server, you already have a version of MySQL installed:
This manual section covers the installation of the official MySQL Mac OS X PKG only. Make sure to read Apple's help about installing MySQL (Run the "Help View" application, select "Mac OS X Server" help, and do a search for "MySQL" and read the item entitled "Installing MySQL").
Note especially that the pre-installed version of MySQL on Mac OS X Server
is started with the command safe_mysqld instead of
mysqld_safe.
If you previously used Marc Liyanage's MySQL packages for Mac OS X from http://www.entropy.ch, you can simply follow the update instructions for packages using the binary installation layout as given on his pages.
If you are upgrading from Marc's version or from the Mac OS X Server version
of MySQL to the official MySQL PKG, you also need to convert the existing
MySQL privilege tables using the mysql_fix_privilege_tables script,
since some new security privileges have been added.
See section 2.5.2 Upgrading From Version 3.23 to 4.0.
After the installation, you can start up MySQL by running the following commands in a terminal window. Please note that you need to have administrator privileges to perform this task.
shell> cd /usr/local/mysql
shell> sudo ./bin/mysqld_safe
(Enter your password, if necessary)
(Press CTRL+Z)
shell> bg
(Press CTRL+D to exit the shell)
You should now be able to connect to the MySQL server, e.g. by running `/usr/local/mysql/bin/mysql'.
If you installed MySQL for the first time, please remember to set a password for the MySQL root user!
This is done with the following two commands:
/usr/local/mysql/bin/mysqladmin -u root password <password>
/usr/local/mysql/bin/mysqladmin -u root -h `hostname` password <password>
You might want to also add aliases to your shell's resource file to
access mysql and mysqladmin from the command-line:
alias mysql '/usr/local/mysql/bin/mysql'
alias mysqladmin '/usr/local/mysql/bin/mysqladmin'
Alternatively, you could simply add /usr/local/mysql/bin to
your PATH environment variable, e.g. by adding the following
to `$HOME/.tcshrc':
setenv PATH ${PATH}:/usr/local/mysql/bin
To enable the automatic startup of MySQL on bootup, you can download Marc Liyanage's MySQL StartupItem from the following location:
http://www2.entropy.ch/download/mysql-startupitem.pkg.tar.gz
We plan to add a StartupItem to the official MySQL PKG in the near future.
Please note that installing a new MySQL PKG does not remove the directory of an older installation - unfortunately the Mac OS X Installer does not yet offer the functionality required to properly upgrade previously installed packages.
After you have copied over the MySQL database files from the previous version and have successfully started the new version, you should consider removing the old installation files to save disk space. Additionally, you should also remove older versions of the Package Receipt directories located in `/Library/Receipts/mysql-<version>.pkg'.
As of version 4.0.11, the MySQL server is available for Novell NetWare in binary package form. In order to host MySQL, the NetWare server must meet these requirements:
The binary package for NetWare can be obtained at http://www.mysql.com/downloads/.
SERVER: mysql -u root shutdown
SERVER: SEARCH ADD SYS:MYSQL\BIN
Enter mysql_install_db at the server console.
Enter mysqld_safe at the server console.
To start MySQL automatically when the server boots, add the startup commands to
autoexec.ncf. For example, if your MySQL installation is in
`SYS:MYSQL' and you want MySQL to start automatically, you could
add these lines:
#Starts the MySQL 4.0.x database server
SEARCH ADD SYS:MYSQL\BIN
MYSQLD_SAFE
If there was an existing installation of MySQL on the server, be sure
to check for existing MySQL startup commands in autoexec.ncf,
and edit or delete them as necessary.
Check the MySQL homepage (http://www.mysql.com/) for information about the current version and for downloading instructions.
Our main mirror is located at http://mirrors.sunsite.dk/mysql/.
For a complete up-to-date list of MySQL web/download mirrors, see http://www.mysql.com/downloads/mirrors.html. There you will also find information about becoming a MySQL mirror site and how to report a bad or out-of-date mirror.
MD5 Checksums or GnuPG
After you have downloaded the MySQL package that suits your needs and before you attempt to install it, you should make sure it is intact and has not been tampered with.
MySQL AB offers two means of integrity checking: MD5 checksums and
cryptographic signatures using GnuPG, the GNU Privacy Guard.
MD5 Checksum
After you have downloaded the package, you should check whether the MD5 checksum matches the one provided on the MySQL download pages. Each package has an individual checksum that you can verify with the following command:
shell> md5sum <package>
Note that not all operating systems support the md5sum command; on
some it is simply called md5, and others do not ship it at all. On Linux,
it is part of the GNU Text Utilities package, which is available for
a wide range of platforms. You can download the source code from
http://www.gnu.org/software/textutils/ as well. If you have
OpenSSL installed, you can also use the command openssl md5
<package> instead. A DOS/Windows implementation of the md5 command
is available from http://www.fourmilab.ch/md5/.
Example:
shell> md5sum mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz
155836a7ed8c93aee6728a827a6aa153
mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz
You should check whether the resulting checksum matches the one printed on the download page right below the respective package.
Most mirror sites also offer a file named `MD5SUMS', which includes the MD5 checksums for all files in the `Downloads' directory. Please note, however, that it is very easy to modify this file, so it is not a very reliable verification method. If in doubt, you should consult different mirror sites and compare the results.
GnuPG
A more reliable method of verifying the integrity of a package is using
cryptographic signatures. MySQL AB uses the GNU Privacy Guard
(GnuPG), an Open Source alternative to the very well-known
Pretty Good Privacy (PGP) by Phil Zimmermann.
See http://www.gnupg.org/ and http://www.openpgp.org/
for more information about OpenPGP/GnuPG and how to obtain
and install GnuPG on your system. Most Linux distributions already
ship with GnuPG installed by default.
Beginning with MySQL 4.0.10 (February 2003), MySQL AB has started signing
their downloadable packages with GnuPG. Cryptographic signatures are
a much more reliable method of verifying the integrity and authenticity of
a file.
To verify the signature for a specific package, you first need to obtain a copy of MySQL AB's public GPG build key build@mysql.com. You can either cut and paste it directly from here, or obtain it from http://www.keyserver.net/.
Key ID:
pub 1024D/5072E1F5 2003-02-03
MySQL Package signing key (www.mysql.com) <build@mysql.com>
Fingerprint: A4A9 4068 76FC BD3C 4567 70C8 8C71 8D3B 5072 E1F5
Public Key (ASCII-armored):
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.0.6 (GNU/Linux)
Comment: For info see http://www.gnupg.org
mQGiBD4+owwRBAC14GIfUfCyEDSIePvEW3SAFUdJBtoQHH/nJKZyQT7h9bPlUWC3
RODjQReyCITRrdwyrKUGku2FmeVGwn2u2WmDMNABLnpprWPkBdCk96+OmSLN9brZ
fw2vOUgCmYv2hW0hyDHuvYlQA/BThQoADgj8AW6/0Lo7V1W9/8VuHP0gQwCgvzV3
BqOxRznNCRCRxAuAuVztHRcEAJooQK1+iSiunZMYD1WufeXfshc57S/+yeJkegNW
hxwR9pRWVArNYJdDRT+rf2RUe3vpquKNQU/hnEIUHJRQqYHo8gTxvxXNQc7fJYLV
K2HtkrPbP72vwsEKMYhhr0eKCbtLGfls9krjJ6sBgACyP/Vb7hiPwxh6rDZ7ITnE
kYpXBACmWpP8NJTkamEnPCia2ZoOHODANwpUkP43I7jsDmgtobZX9qnrAXw+uNDI
QJEXM6FSbi0LLtZciNlYsafwAPEOMDKpMqAK6IyisNtPvaLd8lH0bPAnWqcyefep
rv0sxxqUEMcM3o7wwgfN83POkDasDbs3pjwPhxvhz6//62zQJ7Q7TXlTUUwgUGFj
a2FnZSBzaWduaW5nIGtleSAod3d3Lm15c3FsLmNvbSkgPGJ1aWxkQG15c3FsLmNv
bT6IXQQTEQIAHQUCPj6jDAUJCWYBgAULBwoDBAMVAwIDFgIBAheAAAoJEIxxjTtQ
cuH1cY4AnilUwTXn8MatQOiG0a/bPxrvK/gCAJ4oinSNZRYTnblChwFaazt7PF3q
zIhMBBMRAgAMBQI+PqPRBYMJZgC7AAoJEElQ4SqycpHyJOEAn1mxHijft00bKXvu
cSo/pECUmppiAJ41M9MRVj5VcdH/KN/KjRtW6tHFPYhMBBMRAgAMBQI+QoIDBYMJ
YiKJAAoJELb1zU3GuiQ/lpEAoIhpp6BozKI8p6eaabzF5MlJH58pAKCu/ROofK8J
Eg2aLos+5zEYrB/LsrkCDQQ+PqMdEAgA7+GJfxbMdY4wslPnjH9rF4N2qfWsEN/l
xaZoJYc3a6M02WCnHl6ahT2/tBK2w1QI4YFteR47gCvtgb6O1JHffOo2HfLmRDRi
Rjd1DTCHqeyX7CHhcghj/dNRlW2Z0l5QFEcmV9U0Vhp3aFfWC4Ujfs3LU+hkAWzE
7zaD5cH9J7yv/6xuZVw411x0h4UqsTcWMu0iM1BzELqX1DY7LwoPEb/O9Rkbf4fm
Le11EzIaCa4PqARXQZc4dhSinMt6K3X4BrRsKTfozBu74F47D8Ilbf5vSYHbuE5p
/1oIDznkg/p8kW+3FxuWrycciqFTcNz215yyX39LXFnlLzKUb/F5GwADBQf+Lwqq
a8CGrRfsOAJxim63CHfty5mUc5rUSnTslGYEIOCR1BeQauyPZbPDsDD9MZ1ZaSaf
anFvwFG6Llx9xkU7tzq+vKLoWkm4u5xf3vn55VjnSd1aQ9eQnUcXiL4cnBGoTbOW
I39EcyzgslzBdC++MPjcQTcA7p6JUVsP6oAB3FQWg54tuUo0Ec8bsM8b3Ev42Lmu
QT5NdKHGwHsXTPtl0klk4bQk4OajHsiy1BMahpT27jWjJlMiJc+IWJ0mghkKHt92
6s/ymfdf5HkdQ1cyvsz5tryVI3Fx78XeSYfQvuuwqp2H139pXGEkg0n6KdUOetdZ
Whe70YGNPw1yjWJT1IhMBBgRAgAMBQI+PqMdBQkJZgGAAAoJEIxxjTtQcuH17p4A
n3r1QpVC9yhnW2cSAjq+kr72GX0eAJ4295kl6NxYEuFApmr1+0uUq/SlsQ==
=YJkx
-----END PGP PUBLIC KEY BLOCK-----
You can import this key into your public GPG keyring by using
gpg --import. See the GPG documentation for more info
on how to work with public keys.
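For example, if you have saved the key block above to a file named `mysql_pubkey.asc' (the file name is chosen for illustration), you can import it with:
shell> gpg --import mysql_pubkey.asc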
After you have downloaded and imported the public build key, now download your desired MySQL package and the corresponding signature, which is also available from the download page. The signature has the file name extension `.asc'. For example, the signature for `mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz' would be `mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz.asc'. Make sure that both files are stored in the same directory and then run the following command to verify the signature for this file:
shell> gpg --verify <package>.asc
Example:
shell> gpg --verify mysql-standard-4.0.10-gamma-pc-linux-i686.tar.gz.asc
gpg: Warning: using insecure memory!
gpg: Signature made Mon 03 Feb 2003 08:50:39 PM MET using DSA key ID 5072E1F5
gpg: Good signature from
"MySQL Package signing key (www.mysql.com) <build@mysql.com>"
The "Good signature" message indicates that everything is all right.
For RPM packages, there is no separate signature - RPM packages
actually have a built-in GPG signature and MD5 checksum. You can
verify them by running the following command:
shell> rpm --checksig <package>.rpm
Example:
shell> rpm --checksig MySQL-server-4.0.10-0.i386.rpm
MySQL-server-4.0.10-0.i386.rpm: md5 gpg OK
Note: If you are using RPM 4.1 and it complains about (GPG)
NOT OK (MISSING KEYS: GPG#5072e1f5) (even though you have imported it into
your GPG public keyring), you need to import the key into the RPM keyring
first. RPM 4.1 no longer uses your GPG keyring (and GPG itself), but
rather maintains its own keyring (because it is a system-wide application and
the GPG public keyring is a user-specific file). To import the MySQL public
key into the RPM keyring, please use the following command:
shell> rpm --import <pubkey>
Example:
shell> rpm --import mysql_pubkey.asc
If you notice that the MD5 checksum or GPG signature does not match,
first try downloading the respective package one more time,
perhaps from another mirror site. If you repeatedly cannot
verify the integrity of the package, please notify us at
webmaster@mysql.com or build@mysql.com about such incidents,
including the full package name and the download site you have been using.
We use GNU Autoconf, so it is possible to port MySQL to all modern systems with working Posix threads and a C++ compiler. (To compile only the client code, a C++ compiler is required but not threads.) We use and develop the software ourselves primarily on Sun Solaris (Versions 2.5 - 2.7) and SuSE Linux Version 7.x.
Note that for many operating systems, the native thread support works only in the latest versions. MySQL has been reported to compile successfully on the following operating system/thread package combinations:
glibc 2.0.7+. See section 2.6.1 Linux Notes (All Linux Versions).
Note that not all platforms are suited equally well for running MySQL. How well a certain platform is suited for a high-load mission-critical MySQL server is determined by the following factors:
If the thread library's pthread_mutex_lock() is too anxious to yield CPU time, this will hurt
MySQL tremendously. If this issue is not taken care of, adding extra CPUs
will actually make MySQL slower.
Based on the preceding criteria, the best platforms for running MySQL at this point are x86 with SuSE Linux 7.1, 2.4 kernel, and ReiserFS (or any similar Linux distribution) and SPARC with Solaris 2.7 or 2.8. FreeBSD comes third, but we really hope it will join the top club once the thread library is improved. We also hope that at some point we will be able to include all other platforms on which MySQL compiles, runs okay, but not quite with the same level of stability and performance, into the top category. This will require some effort on our part in cooperation with the developers of the OS/library components MySQL depends upon. If you are interested in making one of those components better, are in a position to influence their development, and need more detailed instructions on what MySQL needs to run better, send an e-mail to internals@lists.mysql.com.
Please note that the preceding comparison is not to say that one OS is better or worse than the other in general. We are talking about choosing a particular OS for a dedicated purpose, running MySQL, and comparing platforms in that regard only. With this in mind, the result of this comparison would be different if we included more issues in it. And in some cases, the reason one OS is better than the other could simply be that we have put more effort into testing on and optimising for that particular platform. We are just stating our observations to help you decide which platform to use for MySQL in your setup.
The first decision to make is whether you want to use the latest development release or the last production (stable) release:
The second decision to make is whether you want to use a source distribution or a binary distribution. In most cases you should probably use a binary distribution, if one exists for your platform, as this generally will be easier to install than a source distribution.
In the following cases you probably will be better off with a source installation:
MySQL
clients can connect to both MySQL versions.
The extended MySQL binary distribution is marked with the
-max suffix and is configured with the same options as
mysqld-max. See section 4.7.5 mysqld-max, An Extended mysqld Server.
If you want to use the MySQL-Max RPM, you must first
install the standard MySQL RPM.
You want to configure mysqld with some extra features that are
not in the standard binary distributions. Here is a list of the most
common extra options that you may want to use:
--with-innodb (default for MySQL 4.0 and onwards)
--with-berkeley-db (not available on all platforms)
--with-raid
--with-libwrap
--with-named-z-libs (This is done for some of the binaries)
--with-debug[=full]
You want to use a different compiler (for example, pgcc), or compiler options that are better optimised for your
processor.
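As an illustration only (the installation path and option selection are arbitrary), a source build enabling some of the options above could be configured like this:
shell> ./configure --prefix=/usr/local/mysql --with-innodb --with-debug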
The MySQL naming scheme uses release numbers that consist of three
numbers and a suffix. For example, a release name like
mysql-3.21.17-beta is interpreted like this:
The first number (3) describes the file format. All Version 3
releases have the same file format.
The second number (21) is the release level. Normally there are two to
choose from. One is the production branch (currently 3.23) and the
other is the development branch (currently 4.0). Normally both are
stable, but the development version may have quirks, may be missing documentation on
new features, or may fail to compile on some systems.
The third number (17) is the version number within the
release level. This is incremented for each new distribution. Usually you
want the latest version for the release level you have chosen.
The suffix (beta) indicates the stability level of the release.
The possible suffixes are:
alpha indicates that the release contains some large section of
new code that hasn't been 100% tested. Known bugs (usually there are none)
should be documented in the News section. See section D MySQL Change History. There are also new
commands and extensions in most alpha releases. Active development that
may involve major code changes can occur on an alpha release, but everything
will be tested before doing a release. There should be no known bugs in any
MySQL release.
beta means that all new code has been tested. No major new
features that could cause corruption on old code are added. There should
be no known bugs. A version changes from alpha to beta when there
haven't been any reported fatal bugs within an alpha version for at least
a month and we don't plan to add any features that could make any old command
more unreliable.
gamma is a beta that has been around a while and seems to work fine.
Only minor fixes are added. This is what many other companies call a release.
All versions of MySQL are run through our standard tests and benchmarks to ensure that they are relatively safe to use. Because the standard tests are extended over time to check for all previously found bugs, the test suite keeps getting better.
Note that all releases have been tested at least with:
crash-me test
Another test is that we use the newest MySQL version in our internal production environment, on at least one machine. We have more than 100 gigabytes of data to work with.
This section describes the default layout of the directories created by installing binary and source distributions.
A binary distribution is installed by unpacking it at the installation location you choose (typically `/usr/local/mysql') and creates the following directories in that location:
| Directory | Contents of directory |
| `bin' | Client programs and the mysqld server |
| `data' | Log files, databases |
| `include' | Include (header) files |
| `lib' | Libraries |
| `scripts' | mysql_install_db |
| `share/mysql' | Error message files |
| `sql-bench' | Benchmarks |
A source distribution is installed after you configure and compile it. By default, the installation step installs files under `/usr/local', in the following subdirectories:
| Directory | Contents of directory |
| `bin' | Client programs and scripts |
| `include/mysql' | Include (header) files |
| `info' | Documentation in Info format |
| `lib/mysql' | Libraries |
| `libexec' | The mysqld server |
| `share/mysql' | Error message files |
| `sql-bench' | Benchmarks and crash-me test |
| `var' | Databases and log files |
Within an installation directory, the layout of a source installation differs from that of a binary installation in the following ways:
mysqld server is installed in the `libexec'
directory rather than in the `bin' directory.
mysql_install_db is installed in the `/usr/local/bin' directory
rather than in `/usr/local/mysql/scripts'.
You can create your own binary installation from a compiled source distribution by executing the script `scripts/make_binary_distribution'.
MySQL is evolving quite rapidly here at MySQL AB and we want to share this with other MySQL users. We try to make a release when we have very useful features that others seem to have a need for.
We also try to help out users who request features that are easy to implement. We take note of what our licensed users want to have, and we especially take note of what our extended e-mail supported customers want and try to help them out.
No one has to download a new release. The News section will tell you if the new release has something you really want. See section D MySQL Change History.
We use the following policy when updating MySQL:
The current production release is Version 4.0; we have already moved active development to Version 4.1 and 5.0. Bugs will still be fixed in the 4.0 version, and critical bugs also in the 3.23 series. We don't believe in a complete freeze, as this also leaves out bug fixes and things that ``must be done.'' ``Somewhat frozen'' means that we may add small things that ``almost surely will not affect anything that's already working.''
MySQL uses a slightly different naming scheme from most other products. In general it's relatively safe to use any version that has been out for a couple of weeks without being replaced with a new version. See section 2.2.6 Which MySQL Version to Use.
We put a lot of time and effort into making our releases bug free. To our knowledge, we have not released a single MySQL version with any known 'fatal' repeatable bugs.
A fatal bug is something that crashes MySQL under normal usage, gives wrong answers for normal queries, or has a security problem.
We have documented all open problems, bugs and things that are dependent on design decisions. See section 1.8.6 Known Errors and Design Deficiencies in MySQL.
Our aim is to fix everything that is fixable, but without risking making a stable MySQL version less stable. In certain cases, this means we can fix an issue in the development version(s), but not in the stable (production) version. Naturally, we document such issues so that users are aware.
Here is a description of how our build process works:
'a' release
for that platform. Thanks to our large user base, problems are found
quickly.
As a service, we at MySQL AB provide a set of binary distributions of MySQL that are compiled at our site or at sites where customers kindly have given us access to their machines.
These distributions are generated using the script
Build-tools/Do-compile, which compiles the source code and creates the
binary tar.gz archive using scripts/make_binary_distribution.
These binaries are configured and built with the following compilers and
options.
Binaries built on MySQL AB development systems:
gcc 2.95.3
CFLAGS="-O2 -mcpu=pentiumpro" CXX=gcc CXXFLAGS="-O2 -mcpu=pentiumpro -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --disable-shared --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static
ecc (Intel C++ Itanium Compiler 7.0)
CC=ecc CFLAGS="-O2 -tpp2 -ip -nolib_inline" CXX=ecc CXXFLAGS="-O2 -tpp2 -ip -nolib_inline" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile
ecc (Intel C++ Itanium Compiler 7.0)
CC=ecc CFLAGS=-tpp1 CXX=ecc CXXFLAGS=-tpp1 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile
ccc (Compaq C V6.2-505 / Compaq C++ V6.3-006)
CC=ccc CFLAGS="-fast -arch generic" CXX=cxx CXXFLAGS="-fast -arch generic -noexceptions -nortti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-mysqld-ldflags=-non_shared --with-client-ldflags=-non_shared --disable-shared
egcs 1.1.2
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --disable-shared
gcc 2.95.3
CFLAGS="-O2" CXX=gcc CXXFLAGS="-O2 -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static
gcc 3.2.1
CXX=gcc ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared
gcc 3.2
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=no --with-named-curses-libs=-lcurses --disable-shared
gcc 3.2
CC=gcc CFLAGS="-O3 -m64 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -m64 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=no --with-named-curses-libs=-lcurses --disable-shared
gcc 2.95.3
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-curses-libs=-lcurses --disable-shared
cc-5.0 (Sun Forte 5.0)
CC=cc-5.0 CXX=CC ASFLAGS="-xarch=v9" CFLAGS="-Xa -xstrconst -mt -D_FORTEC_ -xarch=v9" CXXFLAGS="-noex -mt -D_FORTEC_ -xarch=v9" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=no --enable-thread-safe-client --disable-shared
gcc 3.2.1
CFLAGS="-O2 -mcpu=powerpc -Wa,-many " CXX=gcc CXXFLAGS="-O2 -mcpu=powerpc -Wa,-many -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --disable-shared
gcc 3.2.1
CFLAGS="-O2 -mcpu=powerpc -Wa,-many" CXX=gcc CXXFLAGS="-O2 -mcpu=powerpc -Wa,-many -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --with-server-suffix="-pro" --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --disable-shared
gcc 3.1
CFLAGS="-DHPUX -I/opt/dce/include -O3 -fPIC" CXX=gcc CXXFLAGS="-DHPUX -I/opt/dce /include -felide-constructors -fno-exceptions -fno-rtti -O3 -fPIC" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-pthread --with-named-thread-libs=-ldce --with-lib-ccflags=-fPIC --disable-shared
aCC (HP ANSI C++ B3910B A.03.33)
CC=cc CXX=aCC CFLAGS=+DD64 CXXFLAGS=+DD64 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared
gcc 3.1
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared
gcc 2.95.4
CFLAGS=-DHAVE_BROKEN_REALPATH ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --enable-assembler --with-named-z-libs=not-used --disable-shared
gcc 2.95.3qnx-nto 20010315
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared
The following binaries are built on third-party systems kindly provided to MySQL AB by other users. Please note that these are only provided as a courtesy. Since MySQL AB does not have full control over these systems, we can only provide limited support for the binaries built on these systems.
gcc 2.95.3
CFLAGS="-O3 -mpentium" LDFLAGS=-static CXX=gcc CXXFLAGS="-O3 -mpentium -felide-constructors" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --enable-thread-safe-client --disable-shared
CC 3.2
CC=cc CFLAGS="-O" CXX=CC ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-named-z-libs=no --enable-thread-safe-client --disable-shared
cc/cxx (Compaq C V6.3-029i / DIGITAL C++ V6.1-027)
CC="cc -pthread" CFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all" CXX="cxx -pthread" CXXFLAGS="-O4 -ansi_alias -fast -inline speed -speculate all -noexceptions -nortti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --with-prefix=/usr/local/mysql --with-named-thread-libs="-lpthread -lmach -lexc -lc" --disable-shared --with-mysqld-ldflags=-all-static
gcc 3.0.1
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile --disable-shared
The following compile options have been used for binary packages that MySQL AB provided in the past. These binaries are no longer being updated, but the compile options are kept here for reference purposes.
gcc 2.95.2
CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler --with-mysqld-ldflags=-all-static --disable-shared --with-extra-charsets=complex
gcc 2.7.2.1
CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors" ./configure --prefix=/usr/local/mysql --disable-shared --with-extra-charsets=complex --enable-assembler
egcs 1.0.3a or 2.90.27 or gcc 2.95.2 and newer
CC=gcc CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex --enable-assembler
gcc 2.8.1
CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-low-memory --with-extra-charsets=complex
gcc 2.7.2.1
CC=gcc CXX=gcc CXXFLAGS=-O ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex
gcc 2.7.2
CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex
gcc 2.7.2.2
CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql --with-extra-charsets=complex
Anyone who has more optimal options for any of the configurations listed above can always mail them to the developers' mailing list at internals@lists.mysql.com.
RPM distributions prior to MySQL Version 3.22 are user-contributed. Beginning with Version 3.22, the RPMs are generated by us at MySQL AB.
If you want to compile a debug version of MySQL, you should add
--with-debug or --with-debug=full to the preceding configure lines
and remove any -fomit-frame-pointer options.
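For example, a debug build of a source distribution might be configured roughly like this (a sketch only; the compiler flags are borrowed from the generic gcc line above and should be adjusted for your platform):
CFLAGS="-O2" CXX=gcc CXXFLAGS="-O2 -felide-constructors" ./configure \
    --prefix=/usr/local/mysql --with-extra-charsets=complex \
    --enable-thread-safe-client --enable-local-infile --with-debug=full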
For the Windows distribution, please see section 2.1.2 Installing MySQL on Windows.
See also section 2.1.2.1 Installing the Binaries, section 2.1.1 Installing MySQL on Linux, and section 8.1.13 Building Client Programs.
You need the following tools to install a MySQL binary distribution:
gunzip to uncompress the distribution.
tar to unpack the distribution. GNU tar is
known to work. Sun tar is known to have problems.
An alternative installation method under Linux is to use RPM-based (RPM Package Manager) distributions. See section 2.1.1 Installing MySQL on Linux.
If you run into problems, please always use mysqlbug when
posting questions to mysql@lists.mysql.com. Even if the problem
isn't a bug, mysqlbug gathers system information that will help others
solve your problem. By not using mysqlbug, you lessen the likelihood
of getting a solution to your problem. You will find mysqlbug in the
`bin' directory after you unpack the distribution. See section 1.7.1.3 How to Report Bugs or Problems.
The basic commands you must execute to install and use a MySQL binary distribution are:
shell> groupadd mysql
shell> useradd -g mysql mysql
shell> cd /usr/local
shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
shell> ln -s full-path-to-mysql-VERSION-OS mysql
shell> cd mysql
shell> scripts/mysql_install_db
shell> chown -R root .
shell> chown -R mysql data
shell> chgrp -R mysql .
shell> bin/safe_mysqld --user=mysql &
or
shell> bin/mysqld_safe --user=mysql &
if you are running MySQL 4.x
You can add new users using the bin/mysql_setpermission script if
you install the DBI and DBD-mysql Perl modules.
A more detailed description follows.
To install a binary distribution, follow these steps, then proceed to section 2.4 Post-installation Setup and Testing, for post-installation setup and testing:
(You may need to perform parts of the installation as root.)
MySQL binary distributions are provided as compressed tar
archives and have names like `mysql-VERSION-OS.tar.gz', where
VERSION is a number (for example, 3.21.15), and OS
indicates the type of operating system for which the distribution is intended
(for example, pc-linux-gnu-i586).
If a binary distribution name has a -max suffix, this
means that the binary has support for transaction-safe tables and other
features. See section 4.7.5 mysqld-max, An Extended mysqld Server. Note that all binaries
are built from the same MySQL source distribution.
Add a user and group for mysqld to run as:
shell> groupadd mysql
shell> useradd -g mysql mysql
These commands add the mysql group and the mysql user. The
syntax for useradd and groupadd may differ slightly on different
versions of Unix. They may also be called adduser and addgroup.
You may wish to call the user and group something else instead of mysql.
shell> cd /usr/local
shell> gunzip < /path/to/mysql-VERSION-OS.tar.gz | tar xvf -
shell> ln -s full-path-to-mysql-VERSION-OS mysql
The first command creates a directory named `mysql-VERSION-OS'. The second command makes a symbolic link to that directory. This lets you refer more easily to the installation directory as `/usr/local/mysql'.
shell> cd mysql
You will find several files and subdirectories in the mysql directory.
The most important for installation purposes are the `bin' and
`scripts' subdirectories.
The `bin' directory contains client programs and the server; you should add
its full pathname to your PATH environment variable so that your shell finds the MySQL
programs properly. See section F Environment Variables.
The `scripts' directory contains the mysql_install_db script used to initialise
the mysql database containing the grant tables that store the server
access permissions.
If you want to use mysqlaccess and have the MySQL
distribution in some non-standard place, you must change the location where
mysqlaccess expects to find the mysql client. Edit the
`bin/mysqlaccess' script at approximately line 18. Search for a line
that looks like this:
$MYSQL     = '/usr/local/bin/mysql';    # path to mysql executable
Change the path to reflect the location where mysql actually is
stored on your system. If you do not do this, you will get a Broken
pipe error when you run mysqlaccess.
Create the MySQL grant tables (necessary only if you haven't installed MySQL before):
shell> scripts/mysql_install_db
Note that MySQL versions older than Version 3.22.10 started the MySQL server when you ran
mysql_install_db. This is no longer true.
Change ownership of program binaries to root and ownership of the data
directory to the user that you will run mysqld as:
shell> chown -R root /usr/local/mysql/.
shell> chown -R mysql /usr/local/mysql/data
shell> chgrp -R mysql /usr/local/mysql/.
The first command changes the owner attribute of the files to the
root user, the second one changes the owner attribute of the
data directory to the mysql user, and the third one changes the
group attribute to the mysql group.
If you want to install support for the Perl DBI/DBD interface,
see section 2.7 Perl Installation Comments.
If you would like MySQL to start automatically when you boot your machine,
you can copy support-files/mysql.server to the location where
your system has its startup files. More information can be found in the
support-files/mysql.server script itself and in
section 2.4.3 Starting and Stopping MySQL Automatically.
After everything has been unpacked and installed, you should initialise and test your distribution.
You can start the MySQL server with the following command:
shell> bin/safe_mysqld --user=mysql &
Now proceed to section 4.7.2 safe_mysqld, The Wrapper Around mysqld, and
section 2.4 Post-installation Setup and Testing.
Before you proceed with the source installation, check first to see if our binary is available for your platform and if it will work for you. We put a lot of effort into making sure that our binaries are built with the best possible options.
You need the following tools to build and install MySQL from source:
gunzip to uncompress the distribution.
tar to unpack the distribution. GNU tar is
known to work. Sun tar is known to have problems.
A working ANSI C++ compiler. gcc >= 2.95.2, egcs >= 1.0.2
or egcs 2.91.66, SGI C++, and SunPro C++ are some of the
compilers that are known to work. libg++ is not needed when
using gcc. gcc 2.7.x has a bug that makes it impossible
to compile some perfectly legal C++ files, such as
`sql/sql_base.cc'. If you only have gcc 2.7.x, you must
upgrade your gcc to be able to compile MySQL. gcc
2.8.1 is also known to have problems on some platforms, so it should be
avoided if a new compiler exists for the platform.
gcc >= 2.95.2 is recommended when compiling MySQL
Version 3.23.x.
A good make program. GNU make is always recommended and is
sometimes required. If you have problems, we recommend trying GNU
make 3.75 or newer.
If you are using a recent version of gcc, recent enough to understand the
-fno-exceptions option, it is very important that you use
it. Otherwise, you may compile a binary that crashes randomly. We also
recommend that you use -felide-constructors and -fno-rtti along
with -fno-exceptions. When in doubt, do the following:
CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions \
-fno-rtti" ./configure --prefix=/usr/local/mysql --enable-assembler \
--with-mysqld-ldflags=-all-static
On most systems this will give you a fast and stable binary.
If you run into problems, please always use mysqlbug when
posting questions to mysql@lists.mysql.com. Even if the problem
isn't a bug, mysqlbug gathers system information that will help others
solve your problem. By not using mysqlbug, you lessen the likelihood
of getting a solution to your problem. You will find mysqlbug in the
`scripts' directory after you unpack the distribution.
See section 1.7.1.3 How to Report Bugs or Problems.
The basic commands you must execute to install a MySQL source distribution are:
shell> groupadd mysql
shell> useradd -g mysql mysql
shell> gunzip < mysql-VERSION.tar.gz | tar -xvf -
shell> cd mysql-VERSION
shell> ./configure --prefix=/usr/local/mysql
shell> make
shell> make install
shell> scripts/mysql_install_db
shell> chown -R root /usr/local/mysql
shell> chown -R mysql /usr/local/mysql/var
shell> chgrp -R mysql /usr/local/mysql
shell> cp support-files/my-medium.cnf /etc/my.cnf
shell> /usr/local/mysql/bin/safe_mysqld --user=mysql &
or
shell> /usr/local/mysql/bin/mysqld_safe --user=mysql &
if you are running MySQL 4.x.
If you want to have support for InnoDB tables, you should edit the
/etc/my.cnf file and remove the # character before the
parameter that starts with innodb_....
See section 4.1.2 `my.cnf' Option Files, and section 7.5.3 InnoDB Startup Options.
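For example, after editing, the InnoDB-related part of `/etc/my.cnf' might look something like this (the directory and file size shown here are illustrative only; keep the values suggested in your copy of the file):
[mysqld]
innodb_data_home_dir = /usr/local/mysql/var/
innodb_data_file_path = ibdata1:10M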
If you start from a source RPM, do the following:
shell> rpm --rebuild --clean MySQL-VERSION.src.rpm
This will make a binary RPM that you can install.
You can add new users using the bin/mysql_setpermission script if
you install the DBI and DBD-mysql Perl modules.
A more detailed description follows.
To install a source distribution, follow these steps, then proceed to section 2.4 Post-installation Setup and Testing, for post-installation initialisation and testing:
If you want support for Berkeley DB tables, see section 7.6 BDB or BerkeleyDB Tables.
MySQL source distributions are provided as compressed tar
archives and have names like `mysql-VERSION.tar.gz', where
VERSION is a number like 3.23.57.
Add a user and group for mysqld to run as:
shell> groupadd mysql
shell> useradd -g mysql mysql
These commands add the mysql group and the mysql user. The
syntax for useradd and groupadd may differ slightly on different
versions of Unix. They may also be called adduser and addgroup.
You may wish to call the user and group something else instead of mysql.
shell> gunzip < /path/to/mysql-VERSION.tar.gz | tar xvf -
This command creates a directory named `mysql-VERSION'.
shell> cd mysql-VERSION
Note that currently you must configure and build MySQL from this top-level directory. You cannot build it in a different directory.
shell> ./configure --prefix=/usr/local/mysql
shell> make
When you run configure, you might want to specify some options.
Run ./configure --help for a list of options.
Section 2.3.3 Typical configure Options discusses some of the
more useful options.
If configure fails, and you are going to send mail to
mysql@lists.mysql.com to ask for assistance, please include any
lines from `config.log' that you think can help solve the problem. Also
include the last couple of lines of output from configure if
configure aborts. Post the bug report using the mysqlbug
script. See section 1.7.1.3 How to Report Bugs or Problems.
If the compile fails, see section 2.3.5 Problems Compiling MySQL?, for help with
a number of common problems.
shell> make install
You might need to run this command as root.
Create the MySQL grant tables (necessary only if you haven't installed MySQL before):
shell> scripts/mysql_install_db
Note that MySQL versions older than Version 3.22.10 started the MySQL server when you ran
mysql_install_db. This is no longer true.
Change ownership of program binaries to root and ownership of the data
directory to the user that you will run mysqld as:
shell> chown -R root /usr/local/mysql
shell> chown -R mysql /usr/local/mysql/var
shell> chgrp -R mysql /usr/local/mysql
The first command changes the owner attribute of the files to the
root user, the second one changes the owner attribute of the
data directory to the mysql user, and the third one changes the
group attribute to the mysql group.
If you want to install support for the Perl DBI/DBD interface,
see section 2.7 Perl Installation Comments.
If you would like MySQL to start automatically when you boot your machine,
you can copy support-files/mysql.server to the location where
your system has its startup files. More information can be found in the
support-files/mysql.server script itself and in
section 2.4.3 Starting and Stopping MySQL Automatically.
After everything has been installed, you should initialise and test your distribution:
shell> /usr/local/mysql/bin/safe_mysqld --user=mysql &
If that command fails immediately with mysqld daemon ended, you can
find some information in the file `mysql-data-directory/'hostname'.err'.
The likely reason is that you already have another mysqld server
running. See section 4.1.4 Running Multiple MySQL Servers on the Same Machine.
Now proceed to section 2.4 Post-installation Setup and Testing.
Sometimes patches appear on the mailing list or are placed in the patches area of the MySQL web site (http://www.mysql.com/downloads/patches.html).
To apply a patch from the mailing list, save the message in which the patch appears in a file, change into the top-level directory of your MySQL source tree, and run these commands:
shell> patch -p1 < patch-file-name
shell> rm config.cache
shell> make clean
Patches from the FTP site are distributed as plain text files or as files
compressed with gzip. Apply a plain patch as shown
previously for
mailing list patches. To apply a compressed patch, change into the
top-level directory of your MySQL source tree and run these
commands:
shell> gunzip < patch-file-name.gz | patch -p1
shell> rm config.cache
shell> make clean
After applying a patch, follow the instructions for a normal source install,
beginning with the ./configure step. After running the make
install step, restart your MySQL server.
You may need to bring down any currently running server before you run
make install. (Use mysqladmin shutdown to do this.) Some
systems do not allow you to install a new version of a program if it replaces
the version that is currently executing.
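For example, a typical sequence for installing the patched build over a running server might look like this (the paths assume the default installation layout used elsewhere in this chapter; add -p to mysqladmin if the root account has a password):
shell> mysqladmin -u root shutdown
shell> make install
shell> /usr/local/mysql/bin/safe_mysqld --user=mysql &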
Typical configure Options
The configure script gives you a great deal of control over how
you configure your MySQL distribution. Typically you do this
using options on the configure command-line. You can also affect
configure using certain environment variables. See section F Environment Variables. For a list of options supported by configure, run
this command:
shell> ./configure --help
Some of the more commonly-used configure options are described here:
If you want only the MySQL client library and client programs and not a database server, use the --without-server option:
shell> ./configure --without-server
If you don't have a C++ compiler, mysql will not compile (it is the
one client program that requires C++). In this case,
you can remove the code in configure that tests for the C++ compiler
and then run ./configure with the --without-server option. The
compile step will still try to build mysql, but you can ignore any
warnings about `mysql.cc'. (If make stops, try make -k
to tell it to continue with the rest of the build even if errors occur.)
If you want to build the embedded MySQL library (libmysqld.a) you should
use the --with-embedded-server option.
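For example (illustrative only):
shell> ./configure --prefix=/usr/local/mysql --with-embedded-server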
If you don't want the files installed in the default locations, use a
configure command, something like one
of these:
shell> ./configure --prefix=/usr/local/mysql
shell> ./configure --prefix=/usr/local \
--localstatedir=/usr/local/mysql/data
The first command changes the installation prefix so that everything is
installed under `/usr/local/mysql' rather than the default of
`/usr/local'. The second command preserves the default installation
prefix, but overrides the default location for database directories
(normally `/usr/local/var') and changes it to
/usr/local/mysql/data. After you have compiled MySQL, you can
change these options with option files. See section 4.1.2 `my.cnf' Option Files.
If you want the MySQL Unix socket file located somewhere other than the default
location, use a configure command like this:
shell> ./configure --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock
Note that the given file must be an absolute pathname. You can also later change the location of `mysql.sock' by using the MySQL option files. See section A.4.5 How to Protect or Change the MySQL Socket File `/tmp/mysql.sock'.
If you want to compile statically linked programs (for example, if you are making a binary distribution or want more speed), run configure like this:
shell> ./configure --with-client-ldflags=-all-static \
--with-mysqld-ldflags=-all-static
If you are using gcc and don't have libg++ or libstdc++
installed, you can tell configure to use gcc as your C++
compiler:
shell> CC=gcc CXX=gcc ./configure
When you use gcc as your C++ compiler, it will not attempt to link in
libg++ or libstdc++. This may be a good idea to do even if you
have the above libraries installed, as some versions of these libraries have
caused strange problems for MySQL users in the past.
Here are some common environment variables to set depending on
the compiler you are using:
| Compiler | Recommended options |
| gcc 2.7.2.1 | CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors" |
| egcs 1.0.3a | CC=gcc CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" |
| gcc 2.95.2 | CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro \ -felide-constructors -fno-exceptions -fno-rtti" |
| pgcc 2.90.29 or newer | CFLAGS="-O3 -mpentiumpro -mstack-align-double" CXX=gcc \ CXXFLAGS="-O3 -mpentiumpro -mstack-align-double -felide-constructors \ -fno-exceptions -fno-rtti" |
In most cases you can get a reasonably optimal binary by using the options
from the preceding table and adding the following options to the configure line:
--prefix=/usr/local/mysql --enable-assembler \
--with-mysqld-ldflags=-all-static
The full configure line would, in other words, be something like the following for all recent gcc versions:
CFLAGS="-O3 -mpentiumpro" CXX=gcc CXXFLAGS="-O3 -mpentiumpro \
-felide-constructors -fno-exceptions -fno-rtti" ./configure \
--prefix=/usr/local/mysql --enable-assembler \
--with-mysqld-ldflags=-all-static
The binaries we provide on the MySQL web site at http://www.mysql.com/ are all compiled with full optimisation and should be perfect for most users. See section 2.2.10 MySQL Binaries Compiled by MySQL AB. There are some things you can tweak to make an even faster binary, but this is only for advanced users. See section 5.5.3 How Compiling and Linking Affects the Speed of MySQL.
If the build fails and produces errors about your compiler or linker not being able to create the shared library `libmysqlclient.so.#' (`#' is a version number), you can work around this problem by giving the
--disable-shared option to configure. In this case,
configure will not build a shared `libmysqlclient.so.#' library.
If you don't want DEFAULT column values to be used for
non-NULL columns (that is, columns that are not allowed to be
NULL), configure like this. See section 1.8.5.2 Constraint NOT NULL and DEFAULT values.
shell> CXXFLAGS=-DDONT_USE_DEFAULT_FIELDS ./configure
By default, MySQL uses the latin1 (ISO-8859-1) character set. To change the default set, use the --with-charset option:
shell> ./configure --with-charset=CHARSET
CHARSET may be one of big5, cp1251, cp1257,
czech, danish, dec8, dos, euc_kr,
gb2312, gbk, german1, hebrew, hp8,
hungarian, koi8_ru, koi8_ukr, latin1,
latin2, sjis, swe7, tis620, ujis,
usa7, or win1251ukr.
See section 4.6.1 The Character Set Used for Data and Sorting.
If you want to convert characters between the server and the client,
you should take a look at the SET CHARACTER SET command.
See section 5.5.6 SET Syntax.
Warning: If you change character sets after having created any
tables, you will have to run myisamchk -r -q --set-character-set=charset
on every table. Your
indexes may be sorted incorrectly otherwise. (This can happen if you
install MySQL, create some tables, then reconfigure
MySQL to use a different character set and reinstall it.)
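For example, with the server shut down, and assuming a binary distribution's default data directory and a switch to the latin2 character set (both the path and the character set name are placeholders for your own values), the repair could be run over all MyISAM index files like this:
shell> myisamchk -r -q --set-character-set=latin2 /usr/local/mysql/data/*/*.MYI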
With the option --with-extra-charsets=LIST you can define
which additional character sets should be compiled into the server.
Here LIST is either a list of character
sets separated with spaces,
complex to include all character sets that can't be dynamically loaded,
or all to include all character sets into the binaries.
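For example (the character set names here are just illustrations; any names from the list above can be used):
shell> ./configure --with-extra-charsets="latin1 latin2 ujis"
or
shell> ./configure --with-extra-charsets=complex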
If you want to include support for debugging, use the --with-debug
option:
shell> ./configure --with-debug
This causes a safe memory allocator to be included that can find some errors and that provides output about what is happening. See section E.1 Debugging a MySQL server.
If your client programs use threads, you should compile a thread-safe version
of the MySQL client library with the
--enable-thread-safe-client configure option. This will create a
libmysqlclient_r library with which you should link your threaded
applications. See section 8.1.14 How to Make a Threaded Client.
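A threaded client might then be compiled and linked roughly like this (the program name, include path, and library path are hypothetical and depend on where MySQL is installed; the exact thread library to link against also varies by system):
shell> gcc -o my_threaded_app my_threaded_app.c \
    -I/usr/local/mysql/include/mysql \
    -L/usr/local/mysql/lib/mysql -lmysqlclient_r -lpthread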
Caution: You should read this section only if you are interested in helping us test our new code. If you just want to get MySQL up and running on your system, you should use a standard release distribution (either a source or binary distribution will do).
To obtain our most recent development source tree, use these instructions:
Download BitKeeper from
http://www.bitmover.com/cgi-bin/download.cgi. You will need
BitKeeper 3.0 or newer to access our repository.
When BitKeeper is installed, first go to the directory you
want to work from, and then use one of the following commands to clone
the MySQL version branch of your choice:
To clone the 3.23 (old) branch, use this command:
shell> bk clone bk://mysql.bkbits.net/mysql-3.23 mysql-3.23
To clone the 4.0 (stable/production) branch, use this command:
shell> bk clone bk://mysql.bkbits.net/mysql-4.0 mysql-4.0
To clone the 4.1 alpha branch, use this command:
shell> bk clone bk://mysql.bkbits.net/mysql-4.1 mysql-4.1
To clone the 5.0 development branch, use this command:
shell> bk clone bk://mysql.bkbits.net/mysql-5.0 mysql-5.0
In the preceding examples the source tree will be set up in the `mysql-3.23/', `mysql-4.0/', `mysql-4.1/', or `mysql-5.0/' subdirectory of your current directory.
If you are behind a firewall and can only initiate HTTP connections, you can also use
BitKeeper via HTTP.
If you are required to use a proxy server, simply set the environment
variable http_proxy to point to your proxy:
shell> export http_proxy="http://your.proxy.server:8080/"
Now, simply replace the bk:// with http:// when doing
a clone. Example:
shell> bk clone http://mysql.bkbits.net/mysql-4.1 mysql-4.1
The initial download of the source tree may take a while, depending on the speed of your connection - please be patient.
You will need GNU make, autoconf 2.53 (or newer),
automake 1.5, libtool 1.4, and m4 to run the next
set of commands. Note that automake 1.7 or newer doesn't yet work.
If you are trying to configure MySQL 4.1 you will also need
bison 1.75. Older versions of bison may report this error:
sql_yacc.yy:#####: fatal error: maximum table size (32767)
exceeded. Note: the maximum table size is not actually exceeded,
the error is caused by bugs in these earlier bison versions.
The typical commands to run in a shell are:
cd mysql-4.0
bk -r get -Sq
aclocal; autoheader; autoconf; automake
(cd innobase; aclocal; autoheader; autoconf; automake)    # for InnoDB
(cd bdb/dist; sh s_all)                                   # for Berkeley DB
./configure    # Add your favorite options here
make
If you get some strange error during this stage, check that you really have
libtool installed.
A collection of our standard configure scripts is located in the
`BUILD/' subdirectory. If you are lazy, you can use
`BUILD/compile-pentium-debug'. To compile on a different architecture,
modify the script by removing flags that are Pentium-specific.
When the build is done, run make install. Be careful with this
on a production machine; the command may overwrite your live release
installation. If you have another installation of MySQL, we
recommend that you run ./configure with different values for the
prefix, with-tcp-port, and unix-socket-path options than
those used for your production server.
After the build is done, you can also run the test suite with make test. See section 10.1.2 MySQL Test Suite.
If you have gotten to the make stage and the distribution does
not compile, please report it in our bugs database at
http://bugs.mysql.com/. If you
have installed the latest versions of the required GNU tools, and they
crash trying to process our configuration files, please report that also.
However, if you execute aclocal and get a command not found
error or a similar problem, do not report it. Instead, make sure all
the necessary tools are installed and that your PATH variable is
set correctly so that your shell can find them.
After the initial bk clone operation to get the source tree, you
should run bk pull periodically to get the updates.
You can examine the change history for the tree with all the diffs by using
bk sccstool. If you see some funny diffs or code that you have a
question about, do not hesitate to send e-mail to
internals@lists.mysql.com. Also, if you think you have a better idea
on how to do something, send an e-mail to the same address with a patch.
bk diffs will produce a patch for you after you have made changes
to the source. If you do not have the time to code your idea, just send
a description.
BitKeeper has a nice help utility that you can access via
bk helptool.
Note that any commits (bk ci or bk citool) will
trigger the posting of a message with the changeset to our internals
mailing list, as well as the usual openlogging.org submission with
just the changeset comments.
Generally, you wouldn't need to use commit (since the public tree will
not allow bk push), but rather use the bk diffs method
described previously.
You can also browse changesets, comments, and source code online. For example, to browse this information for MySQL 4.1, go to http://mysql.bkbits.net:8080/mysql-4.1.
The manual is in a separate tree which can be cloned with:
shell> bk clone bk://mysql.bkbits.net/mysqldoc mysqldoc
All MySQL programs compile cleanly for us with no warnings on
Solaris or Linux using gcc. On other systems, warnings may occur due to
differences in system include files. See section 2.3.6 MIT-pthreads Notes for warnings
that may occur when using MIT-pthreads. For other problems, check
the following list.
The solution to many problems involves reconfiguring. If you do need to reconfigure, take note of the following:
configure is run after it already has been run, it may use
information that was gathered during its previous invocation. This
information is stored in `config.cache'. When configure starts
up, it looks for that file and reads its contents if it exists, on the
assumption that the information is still correct. That assumption is invalid
when you reconfigure.
configure, you must run make again
to recompile. However, you may want to remove old object files from previous
builds first because they were compiled using different configuration options.
To prevent old configuration information or object files from being used,
run these commands before rerunning configure:
shell> rm config.cache
shell> make clean
Alternatively, you can run make distclean.
The following list describes some of the problems when compiling MySQL that have been found to occur most often:
If you get errors such as the ones listed here when compiling `sql_yacc.cc', you have probably run out of memory or swap space:
Internal compiler error: program cc1plus got fatal signal 11
Out of virtual memory
Virtual memory exhausted
The problem is that gcc requires huge amounts of memory to compile
`sql_yacc.cc' with inline functions. Try running configure with
the --with-low-memory option:
shell> ./configure --with-low-memory
This option causes
-fno-inline to be added to the compile line if you
are using gcc and -O0 if you are using something else. You
should try the --with-low-memory option even if you have so much
memory and swap space that you think you can't possibly have run out. This
problem has been observed to occur even on systems with generous hardware
configurations, and the --with-low-memory option usually fixes it.
By default, configure picks c++ as the compiler name and
GNU c++ links with -lg++. If you are using gcc,
that behaviour can cause problems during configuration such as this:
configure: error: installation or configuration problem:
C++ compiler cannot create executables.
You might also observe problems during compilation related to
g++, libg++, or libstdc++.
One cause of these problems is that you may not have g++, or you may
have g++ but not libg++, or libstdc++. Take a look at
the `config.log' file. It should contain the exact reason why your c++
compiler didn't work. To work around these problems, you can use gcc
as your C++ compiler. Try setting the environment variable CXX to
"gcc -O3". For example:
shell> CXX="gcc -O3" ./configureThis works because
gcc compiles C++ sources as well as g++
does, but does not link in libg++ or libstdc++ by default.
Another way to fix these problems, of course, is to install g++,
libg++, and libstdc++. We would however like to recommend
you to not use libg++ or libstdc++ with MySQL as this will
only increase the binary size of mysqld without giving you any benefits.
Some versions of these libraries have also caused strange problems for
MySQL users in the past.
If your compile fails with errors such as any of the following, you must upgrade your version of make to GNU make:
making all in mit-pthreads
make: Fatal error in reader: Makefile, line 18: Badly formed macro assignment
or
make: file `Makefile' line 18: Must be a separator (:
or
pthread.h: No such file or directory
Solaris and FreeBSD are known to have troublesome make programs.
GNU make Version 3.75 is known to work.
If you want to define flags to be used by your C or C++ compilers, do so by
adding the flags to the CFLAGS and CXXFLAGS environment
variables. You can also specify the compiler names this way using CC
and CXX. For example:
shell> CC=gcc
shell> CFLAGS=-O3
shell> CXX=gcc
shell> CXXFLAGS=-O3
shell> export CC CFLAGS CXX CXXFLAGS
See section 2.2.10 MySQL Binaries Compiled by MySQL AB, for a list of flag definitions that have been found to be useful on various systems.
If you get errors such as the one shown here, you are probably using an old version of the gcc compiler:
client/libmysql.c:273: parse error before `__attribute__'
gcc 2.8.1 is known to work, but we recommend using gcc 2.95.2 or
egcs 1.0.3a instead.
If you get errors such as those shown here when compiling mysqld,
configure didn't correctly detect the type of the last argument to
accept(), getsockname(), or getpeername():
cxx: Error: mysqld.cc, line 645: In this statement, the referenced
type of the pointer value ''length'' is ''unsigned long'', which
is not compatible with ''int''.
new_sock = accept(sock, (struct sockaddr *)&cAddr, &length);
To fix this, edit the `config.h' file (which is generated by
configure). Look for these lines:
/* Define as the base type of the last arg to accept */
#define SOCKET_SIZE_TYPE XXX
Change XXX to size_t or int, depending on your
operating system. (Note that you will have to do this each time you run
configure because configure regenerates `config.h'.)
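For example, on a system where the correct type is size_t, the edited line would read:
#define SOCKET_SIZE_TYPE size_t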
"sql_yacc.yy", line xxx fatal: default action causes potential...This is a sign that your version of
yacc is deficient.
You probably need to install bison (the GNU version of yacc)
and use that instead.
If you want to debug mysqld or a MySQL client, run
configure with the --with-debug option, then recompile and
link your clients with the new client library. See section E.2 Debugging a MySQL client.
This section describes some of the issues involved in using MIT-pthreads.
Note that on Linux you should not use MIT-pthreads but use the installed LinuxThreads implementation instead. See section 2.6.1 Linux Notes (All Linux Versions).
If your system does not provide native thread support, you will need to build MySQL using the MIT-pthreads package. This includes older FreeBSD systems, SunOS 4.x, Solaris 2.4 and earlier, and some others. See section 2.2.5 Operating Systems Supported by MySQL.
Note that beginning with MySQL 4.0.2, MIT-pthreads is no longer part of the source distribution. If you require this package, you need to download it separately from http://www.mysql.com/Downloads/Contrib/pthreads-1_60_beta6-mysql.tar.gz.
After downloading, extract this source archive into the top level of the
MySQL source directory. It will create a new subdirectory
mit-pthreads.
To compile with MIT-pthreads, run configure with the --with-mit-threads option:
shell> ./configure --with-mit-threads
Building in a non-source directory is not supported when using MIT-pthreads because we want to minimise our changes to this code.
Unless you run configure with the --without-server option
to build only the client code, clients will not know whether
MIT-pthreads is being used and will use Unix socket connections by default.
Because Unix sockets do not work under MIT-pthreads on some platforms, this
means you will need to use -h or --host when you run client
programs.
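For example (your-host-name is a placeholder for your server's host name):
shell> mysql -h your-host-name test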
--external-locking option. This is only
needed if you want to be able to run two MySQL servers against the same
data files (not recommended).
Sometimes the pthread bind() command fails to bind to a socket without
any error message (at least on Solaris). The result is that all connections
to the server fail. For example:
shell> mysqladmin version
mysqladmin: connect to server at '' failed;
error: 'Can't connect to mysql server on localhost (146)'
The solution to this is to kill the mysqld server and restart it.
This has only happened to us when we have forced the server down and done
a restart immediately.
Another problem is that the sleep() system call isn't interruptible with
SIGINT (break). This is only noticeable when you run
mysqladmin --sleep. You must wait for the sleep() call to
terminate before the interrupt is served and the process stops.
Warnings like the following can be ignored:
ld: warning: symbol `_iob' has differing sizes:
(file /my/local/pthreads/lib/libpthread.a(findfp.o) value=0x4;
file /usr/lib/libc.so value=0x140);
/my/local/pthreads/lib/libpthread.a(findfp.o) definition taken
ld: warning: symbol `__iob' has differing sizes:
(file /my/local/pthreads/lib/libpthread.a(findfp.o) value=0x4;
file /usr/lib/libc.so value=0x140);
/my/local/pthreads/lib/libpthread.a(findfp.o) definition taken
implicit declaration of function `int strtoll(...)' implicit declaration of function `int strtoul(...)'
readline to work with MIT-pthreads. (This isn't
needed, but may be interesting for someone.)
You will need the following:
Building MySQL
From the File menu, select Open Workspace.
From the Build menu, select the Set Active Configuration menu.
Select mysqld - Win32 Debug and click OK.
Press F7 to begin the build of the debug server, libs, and
some client applications.
Set up and start the server in the same way as for the binary Windows distribution. See section 2.1.2.2 Preparing the Windows MySQL Environment.
Once you've installed MySQL (from either a binary or source distribution), you need to initialise the grant tables, start the server, and make sure that the server works okay. You may also wish to arrange for the server to be started and stopped automatically when your system starts up and shuts down.
Normally you install the grant tables and start the server like this for installation from a source distribution:
shell> ./scripts/mysql_install_db
shell> cd mysql_installation_directory
shell> ./bin/safe_mysqld --user=mysql &
For a binary distribution (not RPM or pkg packages), do this:
shell> cd mysql_installation_directory
shell> ./scripts/mysql_install_db
shell> ./bin/safe_mysqld --user=mysql &
or
shell> ./bin/mysqld_safe --user=mysql &
if you are running MySQL 4.x.
This creates the mysql database which will hold all database
privileges, the test database which you can use to test
MySQL, and also privilege entries for the user that runs
mysql_install_db and a root user (without any passwords).
This also starts the mysqld server.
mysql_install_db will not overwrite any old privilege tables, so
it should be safe to run in any circumstances. If you don't want to
have the test database you can remove it with mysqladmin -u
root drop test.
Testing is most easily done from the top-level directory of the MySQL distribution. For a binary distribution, this is your installation directory (typically something like `/usr/local/mysql'). For a source distribution, this is the main directory of your MySQL source tree.
In the commands shown in this section and in the following
subsections, BINDIR is the path to the location in which programs
like mysqladmin and safe_mysqld are installed. For a
binary distribution, this is the `bin' directory within the
distribution. For a source distribution, BINDIR is probably
`/usr/local/bin', unless you specified an installation directory
other than `/usr/local' when you ran configure.
EXECDIR is the location in which the mysqld server is
installed. For a binary distribution, this is the same as
BINDIR. For a source distribution, EXECDIR is probably
`/usr/local/libexec'.
Testing is described in detail:
If necessary, initialise the data directory used by the mysqld server and set up the initial
MySQL grant tables containing the privileges that determine how
users are allowed to connect to the server. This is normally done with the
mysql_install_db script:
shell> scripts/mysql_install_db
Typically, mysql_install_db needs to be run only the first time you
mysql_install_db needs to be run only the first time you
install MySQL. Therefore, if you are upgrading an existing
installation, you can skip this step. (However, mysql_install_db is
quite safe to use and will not update any tables that already exist, so if
you are unsure of what to do, you can always run mysql_install_db.)
mysql_install_db creates six tables (user, db,
host, tables_priv, columns_priv, and func) in the
mysql database. A description of the initial privileges is given in
section 4.3.4 Setting Up the Initial MySQL Privileges. Briefly, these privileges allow the MySQL
root user to do anything, and allow anybody to create or use databases
with a name of test or starting with test_.
If you don't set up the grant tables, the following error will appear in the
log file when you start the server:
mysqld: Can't find file: 'host.frm'
This may also happen with a binary MySQL distribution if you don't start MySQL by executing exactly
./bin/safe_mysqld.
See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
You might need to run mysql_install_db as root. However,
if you prefer, you can run the MySQL server as an unprivileged
(non-root) user, provided that the user can read and write files in
the database directory. Instructions for running MySQL as an
unprivileged user are given in section A.3.2 How to Run MySQL As a Normal User.
If you have problems with mysql_install_db, see
section 2.4.1 Problems Running mysql_install_db.
There are some alternatives to running the mysql_install_db
script as it is provided in the MySQL distribution:
You can edit mysql_install_db before running it, to change
the initial privileges that are installed into the grant tables. This is
useful if you want to install MySQL on a lot of machines with the
same privileges. In this case you probably should only need to add a few
extra INSERT statements to the mysql.user and mysql.db
tables.
After running mysql_install_db, you can use mysql -u root mysql to
connect to the grant tables as the MySQL root user and issue
SQL statements to modify the grant tables directly.
You can also re-create the grant tables from scratch by removing (or renaming) the mysql database directory and then re-running mysql_install_db.
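As an example of modifying the grant tables directly with mysql -u root mysql, the following adds an account that can connect only from the local host (the user name and password are purely illustrative):
shell> mysql -u root mysql
mysql> INSERT INTO user (Host,User,Password) VALUES('localhost','custom',PASSWORD('some_pass'));
mysql> FLUSH PRIVILEGES;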
Start the MySQL server:
shell> cd mysql_installation_directory
shell> bin/safe_mysqld &
If you have problems starting the server, see section 2.4.2 Problems Starting the MySQL Server.
Use mysqladmin to verify that the server is running. The following
commands provide a simple test to check that the server is up and responding
to connections:
shell> BINDIR/mysqladmin version
shell> BINDIR/mysqladmin variables
The output from
mysqladmin version varies slightly depending on your
platform and version of MySQL, but should be similar to that shown here:
shell> BINDIR/mysqladmin version
mysqladmin  Ver 8.14 Distrib 3.23.32, for linux on i586
Copyright (C) 2000 MySQL AB & MySQL Finland AB & TCX DataKonsult AB
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL license.

Server version          3.23.32-debug
Protocol version        10
Connection              Localhost via Unix socket
TCP port                3306
UNIX socket             /tmp/mysql.sock
Uptime:                 16 sec

Threads: 1  Questions: 9  Slow queries: 0  Opens: 7  Flush tables: 2
Open tables: 0  Queries per second avg: 0.000
Memory in use: 132K  Max memory used: 16773K
To get a feeling for what else you can do with
BINDIR/mysqladmin,
invoke it with the --help option.
Verify that you can shut down the server:
shell> BINDIR/mysqladmin -u root shutdown
Verify that you can restart the server. Do this by using safe_mysqld or
by invoking mysqld directly. For example:
shell> BINDIR/safe_mysqld --log &
If safe_mysqld fails, try running it from the MySQL
safe_mysqld fails, try running it from the MySQL
installation directory (if you are not already there). If that doesn't work,
see section 2.4.2 Problems Starting the MySQL Server.
Run some simple tests to verify that the server is working. The output should be similar to what is shown here:
shell> BINDIR/mysqlshow
+-----------+
| Databases |
+-----------+
| mysql     |
+-----------+

shell> BINDIR/mysqlshow mysql
Database: mysql
+--------------+
|    Tables    |
+--------------+
| columns_priv |
| db           |
| func         |
| host         |
| tables_priv  |
| user         |
+--------------+

shell> BINDIR/mysql -e "SELECT host,db,user FROM db" mysql
+------+--------+------+
| host | db     | user |
+------+--------+------+
| %    | test   |      |
| %    | test_% |      |
+------+--------+------+
There is also a benchmark suite in the `sql-bench' directory (under the MySQL installation directory) that you can use to compare how MySQL performs on different platforms. The benchmark suite is written in Perl, using the Perl DBI module to provide a database-independent interface to the various databases. The following additional Perl modules are required to run the benchmark suite:
DBI
DBD-mysql
Data-Dumper
Data-ShowTable
These modules can be obtained from CPAN http://www.cpan.org/. See section 2.7.1 Installing Perl on Unix.
The `sql-bench/Results' directory contains the results from many runs against different databases and platforms. To run all tests, execute these commands:
shell> cd sql-bench
shell> run-all-tests
If you don't have the `sql-bench' directory, you are probably using an RPM for a binary distribution. (Source distribution RPMs include the benchmark directory.) In this case, you must first install the benchmark suite before you can use it. Beginning with MySQL Version 3.22, there are benchmark RPM files named `mysql-bench-VERSION-i386.rpm' that contain benchmark code and data.
If you have a source distribution, you can also run the tests in the `tests' subdirectory. For example, to run `auto_increment.tst', do this:
shell> BINDIR/mysql -vvf test < ./tests/auto_increment.tst
The expected results are shown in the `./tests/auto_increment.res' file.
Problems Running mysql_install_db
The purpose of the mysql_install_db script is to generate new
MySQL privilege tables. It will not affect any other data.
It will also not do anything if you already have MySQL privilege
tables installed.
If you want to re-create your privilege tables, you should take down
the mysqld server, if it's running, and then do something like:
mv mysql-data-directory/mysql mysql-data-directory/mysql-old
mysql_install_db
This section lists problems you might encounter when you run
mysql_install_db:
mysql_install_db doesn't install the grant tables
You may find that mysql_install_db fails to install the grant
tables and terminates after displaying the following messages:
starting mysqld daemon with databases from XXXXXX
mysql daemon ended
In this case, you should examine the log file very carefully. The log should be located in the directory `XXXXXX' named by the error message, and should indicate why
mysqld didn't start. If you don't understand
what happened, include the log when you post a bug report using
mysqlbug.
See section 1.7.1.3 How to Report Bugs or Problems.
There is a mysqld daemon already running. In this case,
you probably don't have to run mysql_install_db at
all. You have to run mysql_install_db only once, when you install
MySQL the first time.
Installing a second mysqld daemon doesn't work when one daemon is running.
This usually happens because the second server tries to use the same port or
socket file as the first one, resulting in errors like Can't start server: Bind on
TCP/IP port: Address already in use or Can't start server: Bind on
unix socket.... See section 4.1.3 Installing Many Servers on the Same Machine.
You don't have write access to the `/tmp' directory. In this case you are
likely to get an error when running mysql_install_db or when
starting or using mysqld.
You can specify a different socket and temporary directory as follows:
shell> TMPDIR=/some_tmp_dir/
shell> MYSQL_UNIX_PORT=/some_tmp_dir/mysqld.sock
shell> export TMPDIR MYSQL_UNIX_PORT
See section A.4.5 How to Protect or Change the MySQL Socket File `/tmp/mysql.sock'. `some_tmp_dir' should be the path to some directory for which you have write permission. See section F Environment Variables. After this you should be able to run
mysql_install_db and start
the server with these commands:
shell> scripts/mysql_install_db
shell> BINDIR/safe_mysqld &
mysqld crashes immediately
If you are using a version of glibc older than
2.0.7-5, you should make sure you have installed all glibc patches.
There is a lot of information about this in the MySQL mail
archives. Links to the mail archives are available online at
http://lists.mysql.com/.
Also, see section 2.6.1 Linux Notes (All Linux Versions).
You can also start mysqld manually using the --skip-grant-tables
option and add the privilege information yourself using mysql:
shell> BINDIR/safe_mysqld --skip-grant-tables &
shell> BINDIR/mysql -u root mysql
From
mysql, manually execute the SQL commands in
mysql_install_db. Make sure you run mysqladmin
flush-privileges or mysqladmin reload afterward to tell the server to
reload the grant tables.
If you are going to use tables that support transactions (InnoDB, BDB), you should first create a `my.cnf' file and set startup options for the table types you plan to use. See section 7 MySQL Table Types.
Generally, you start the mysqld server in one of these ways:
By invoking mysql.server. This script is used primarily at
system startup and shutdown, and is described more fully in
section 2.4.3 Starting and Stopping MySQL Automatically.
By invoking safe_mysqld, which tries to determine the proper options
for mysqld and then runs it with those options. See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
By invoking mysqld directly.
When the mysqld daemon starts up, it changes the directory to the
data directory. This is where it expects to write log files and the pid
(process ID) file, and where it expects to find databases.
The data directory location is hardwired in when the distribution is
compiled. However, if mysqld expects to find the data directory
somewhere other than where it really is on your system, it will not work
properly. If you have problems with incorrect paths, you can find out
what options mysqld allows and what the default path settings are by
invoking mysqld with the --help option. You can override the
defaults by specifying the correct pathnames as command-line arguments to
mysqld. (These options can be used with safe_mysqld as well.)
Normally you should need to tell mysqld only the base directory under
which MySQL is installed. You can do this with the --basedir
option. You can also use --help to check the effect of changing path
options (note that --help must be the final option of the
mysqld command). For example:
shell> EXECDIR/mysqld --basedir=/usr/local --help
Once you determine the path settings you want, start the server without
the --help option.
Whichever method you use to start the server, if it fails to start up
correctly, check the log file to see if you can find out why. Log files
are located in the data directory (typically
`/usr/local/mysql/data' for a binary distribution,
`/usr/local/var' for a source distribution, and
`\mysql\data\mysql.err' on Windows). Look in the data directory for
files with names of the form `host_name.err' and
`host_name.log' where host_name is the name of your server
host. Then check the last few lines of these files:
shell> tail host_name.err
shell> tail host_name.log
Look for something like the following in the log file:
000729 14:50:10  bdb:  Recovery function for LSN 1 27595 failed
000729 14:50:10  bdb:  warning: ./test/t1.db: No such file or directory
000729 14:50:10  Can't init databases
This means that you didn't start mysqld with --bdb-no-recover
and Berkeley DB found something wrong with its log files when it
tried to recover your databases. To be able to continue, you should
move away the old Berkeley DB log file from the database directory to
some other place, where you can later examine it. The log files are
named `log.0000000001', where the number will increase over time.
If you are running mysqld with BDB table support and mysqld core
dumps at start this could be because of some problems with the BDB
recover log. In this case you can try starting mysqld with
--bdb-no-recover. If this helps, then you should remove all
`log.*' files from the data directory and try starting mysqld
again.
If you get the following error, it means that some other program (or another
mysqld server) is already using the TCP/IP port or socket
mysqld is trying to use:
Can't start server: Bind on TCP/IP port: Address already in use
or
Can't start server : Bind on unix socket...
Use ps to make sure that you don't have another mysqld server
running. If you can't find another server running, you can try to execute
the command telnet your-host-name tcp-ip-port-number and press
Enter a couple of times. If you don't get an error message like
telnet: Unable to connect to remote host: Connection refused,
something is using the TCP/IP port mysqld is trying to use.
See section 2.4.1 Problems Running mysql_install_db and section 4.1.4 Running Multiple MySQL Servers on the Same Machine.
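For example, to check whether anything is already listening on the default MySQL port on the local machine (the host name and port number are examples only):
shell> telnet localhost 3306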
If mysqld is currently running, you can find out what path settings
it is using by executing this command:
shell> mysqladmin variables
or
shell> mysqladmin -h 'your-host-name' variables
If you get Errcode 13, which means Permission denied, when
starting mysqld this means that you didn't have the right to
read/create files in the MySQL database or log directory. In this case
you should either start mysqld as the root user or change the
permissions for the involved files and directories so that you have the
right to use them.
If safe_mysqld starts the server but you can't connect to it,
you should make sure you have an entry in `/etc/hosts' that looks like
this:
127.0.0.1 localhost
This problem occurs only on systems that don't have a working thread library and for which MySQL must be configured to use MIT-pthreads.
If you can't get mysqld to start you can try to make a trace file
to find the problem. See section E.1.2 Creating Trace Files.
If you are using InnoDB tables, refer to the InnoDB-specific startup options. See section 7.5.3 InnoDB Startup Options.
If you are using BDB (Berkeley DB) tables, you should familiarise
yourself with the different BDB-specific startup options. See section 7.6.3 BDB startup options.
The mysql.server and safe_mysqld scripts can be used to start
the server automatically at system startup time. mysql.server can also
be used to stop the server.
The mysql.server script can be used to start or stop the server
by invoking it with start or stop arguments:
shell> mysql.server start
shell> mysql.server stop
mysql.server can be found in the `share/mysql' directory
under the MySQL installation directory or in the `support-files'
directory of the MySQL source tree.
Before mysql.server starts the server, it changes the directory to
the MySQL installation directory, then invokes safe_mysqld.
You might need to edit mysql.server if you have a binary distribution
that you've installed in a non-standard location. Modify it to cd
into the proper directory before it runs safe_mysqld. If you want the
server to run as some specific user, add an appropriate user line
to the `/etc/my.cnf' file, as shown later in this section.
mysql.server stop brings down the server by sending a signal to it.
You can also take down the server manually by executing
mysqladmin shutdown.
You need to add these start and stop commands to the appropriate places in your `/etc/rc*' files when you want to start up MySQL automatically on your server.
On most current Linux distributions, it is sufficient to copy the file
mysql.server into the `/etc/init.d' directory (or
`/etc/rc.d/init.d' on older Red Hat systems). Afterwards, run the
following command to enable the startup of MySQL on system bootup:
shell> chkconfig --add mysql.server
As an alternative to the above, some operating systems also use `/etc/rc.local' or `/etc/init.d/boot.local' to start additional services on bootup. To start up MySQL using this method, you could append something like the following to it:
/bin/sh -c 'cd /usr/local/mysql ; ./bin/safe_mysqld --user=mysql &'
You can also add options for mysql.server in a global
`/etc/my.cnf' file. A typical `/etc/my.cnf' file might look like
this:
[mysqld]
datadir=/usr/local/mysql/var
socket=/var/tmp/mysql.sock
port=3306
user=mysql

[mysql_server]
basedir=/usr/local/mysql
The mysql.server script understands the following options:
datadir, basedir, and pid-file.
The following table shows which option groups each of the startup scripts read from option files:
| Script       | Option groups                    |
| mysqld       | mysqld and server                |
| mysql.server | mysql.server, mysqld, and server |
| safe_mysqld  | mysql.server, mysqld, and server |
See section 4.1.2 `my.cnf' Option Files.
You can always move the MySQL format files and datafiles between different
versions on the same architecture as long as you have the same base
version of MySQL. The current base version is 4. If you change the
character set when running MySQL (which may also change the sort order),
you must run myisamchk -r -q --set-character-set=charset on all
tables. Otherwise, your indexes may not be ordered correctly.
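For example, a minimal sketch of such a rebuild, assuming the data directory is `/usr/local/mysql/var' and the new character set is latin1 (adjust both to your setup):
shell> myisamchk -r -q --set-character-set=latin1 /usr/local/mysql/var/*/*.MYI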
If you are afraid of new versions, you can always rename your old
mysqld to something like mysqld-old-version-number.
If your new mysqld then does something unexpected, you can simply
shut it down and restart with your old mysqld.
When you do an upgrade you should also back up your old databases, of course.
If, after an upgrade, you experience problems with recompiled client programs,
like Commands out of sync or unexpected core dumps, you probably have
used an old header or library file when compiling your programs. In this
case you should check the date for your `mysql.h' file and
`libmysqlclient.a' library to verify that they are from the new
MySQL distribution. If not, please recompile your programs.
If you have problems with the new mysqld server not wanting to
start, or find that you can't connect without a password, check that you don't
have some old `my.cnf' file from your old installation. You can
check this with: program-name --print-defaults. If this outputs
anything other than the program name, you have an active `my.cnf'
file that will affect things.
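For example, to see which defaults the server would pick up (the exact output depends on your option files):
shell> mysqld --print-defaults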
It is a good idea to rebuild and reinstall the Perl DBD-mysql
module whenever you install a new release of MySQL. The same applies for
Python MySQLdb.
Some visible things have changed between MySQL 4.0 and MySQL 4.1 to fix some critical bugs and make MySQL more compatible with the ANSI SQL standard.
Instead of adding options (and a lot of code) to try to make 4.1 behave like 4.0 we have taken another approach:
We have added to the later MySQL 4.0 releases the --new mysqld
startup option, which gives you the 4.1 behaviour for the most critical
changes. You can also set this for just one thread with the SET
@@new=1 command.
If you believe that some of the following changes will affect you when
you upgrade to 4.1, we recommend that you first download the latest
MySQL 4.0 version and make sure your application works in the --new
mode. This way you will later have a smooth, painless upgrade.
In MySQL 4.1 we have done the following things that may affect some applications:
TIMESTAMP is now returned as a string with the format
'YYYY-MM-DD HH:MM:SS'. If you want to have this as a number (as
Version 4.0 returns it), you should add +0 to the timestamp column. Different
timestamp lengths are not supported.
This change was necessary for SQL standards compliance. In a future
version, a further change will be made (backward compatible with this
change), allowing the timestamp length to indicate the desired number of
digits of fractions of a second.
From version 4.0.12, the --new option can be used to make the
server behave as 4.1.
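A minimal illustration of the +0 workaround mentioned above (ts_col and tbl_name are hypothetical names):
mysql> SELECT ts_col + 0 FROM tbl_name;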
Hexadecimal values (such as 0xFFDF) are now assumed to be strings instead of
numbers. This fixes some problems with character sets where it's
convenient to input the string as a binary item.
After this change you have to convert the binary string to
INTEGER with a CAST if you want to compare two binary
items with each other and know which one is bigger than the other:
SELECT CAST(0xfeff AS UNSIGNED INTEGER) < CAST(0xff AS UNSIGNED
INTEGER). Using binary items in a number context or comparing them with
= should work as before.
From version 4.0.13, the --new option can be used to make the server behave as 4.1 in this aspect.
AUTO_INCREMENT columns can't take DEFAULT values. (In 4.0
these were just silently ignored.)
The SERIALIZE option has been removed from the SQL_MODE
variable. You should instead use
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE.
In general, what you have to do when upgrading to 4.1 from an earlier MySQL version:
Run mysql_fix_privilege_tables to generate the new
password field that is needed for secure handling of passwords.
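A hedged example of running the script (if your MySQL root user has a password, give it as an argument; root_password is a placeholder):
shell> mysql_fix_privilege_tables root_password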
The following is a list of things that you have to watch out for when upgrading to version 4.1:
DATE, DATETIME or TIME
result is now fixed up when returned to the client.
mysql> SELECT cast("2001-1-1" as DATE)
-> '2001-01-01'
SHOW CREATE TABLE and mysqldump.
(MySQL 4.0.6 and above can read the new dump files, but not previous
MySQL versions).
TIMESTAMP is now returned as a string with the format
'YYYY-MM-DD HH:MM:SS'. If you want to have this as a number you
should add +0 to the timestamp column. Different timestamp lengths are
not supported. See section 2.5.1.1 Preparing to Upgrade From Version 4.0 to 4.1.
Hexadecimal values (such as 0xFFDF) are now assumed to be strings instead of
numbers. See section 2.5.1.1 Preparing to Upgrade From Version 4.0 to 4.1.
--shared_memory_base_name option on all machines.
Note that the table definition format (.frm) has changed
slightly in 4.1. MySQL 4.0.11 can read the new .frm format but older
versions cannot. If you need to move tables from 4.1 to an earlier
MySQL version you should use mysqldump. See section 4.8.5 mysqldump, Dumping Table Structure and Data.
If you are running MySQL Server on Windows, please also see section 2.5.7 Upgrading MySQL under Windows.
In general, what you have to do when upgrading to 4.0 from an earlier MySQL version:
Run mysql_fix_privilege_tables to add new
privileges and features
to the MySQL privilege tables.
Optionally run mysql_convert_table_format database to convert old ISAM
tables to MyISAM. Note that this should only
be run if all tables in the given database are ISAM or MyISAM tables. If
this is not the case you should run ALTER TABLE table_name TYPE=MyISAM
on all ISAM tables.
Check whether you have any clients that use the shared client library (such as
Perl DBD-mysql). If you have, you should recompile
them, as structures used in `libmysqlclient.so' have changed.
The same applies for Python MySQLdb.
MySQL 4.0 will work even if you don't do the above, but you will not be able to use the new security privileges that MySQL 4.0 provides, and you may run into problems when upgrading later to MySQL 4.1 or newer. The ISAM file format still works in MySQL 4.0 but it's deprecated and will be disabled in MySQL 5.0.
Old clients should work with a Version 4.0 server without any problems.
Even if you do the above, you can still downgrade to MySQL 3.23.52 or
newer if you run into problems with the MySQL 4.0 series. In this case
you have to do a mysqldump of any tables using a full-text index
and restore these in 3.23 (because 4.0 uses a new format for full-text
indexes).
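A minimal sketch of such a downgrade dump (db_name and tbl_name are placeholder names; run the dump on the 4.0 server and the restore on the 3.23 server):
shell> mysqldump db_name tbl_name > tbl_name.sql
shell> mysql db_name < tbl_name.sql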
The following is a more complete list of things to watch out for when upgrading to version 4.0:
MySQL 4.0 has a number of new privileges in the mysql.user table.
See section 4.3.1 GRANT and REVOKE Syntax.
To get these new privileges to work, one must run the
mysql_fix_privilege_tables script. Until this script is run all
users have the SHOW DATABASES, CREATE TEMPORARY TABLES,
and LOCK TABLES privileges. SUPER and EXECUTE
privileges take their value from PROCESS.
REPLICATION SLAVE and REPLICATION CLIENT take their
values from FILE.
If you have any scripts that create new users, you may want to change
them to use the new privileges. If you are not using GRANT
commands in the scripts, this is a good time to change your scripts.
In version 4.0.2 the option --safe-show-database is deprecated
(and no longer does anything). See section 4.2.3 Startup Options for mysqld Concerning Security.
If you get access denied errors for new users in version 4.0.2, you
should check if you need some of the new grants that you didn't need
before. In particular, you will need REPLICATION SLAVE
(instead of FILE) for new slaves.
The startup parameters myisam_max_extra_sort_file_size and
myisam_max_sort_file_size are now given in bytes
(they were given in megabytes before 4.0.3).
External system locking of MyISAM/ISAM files is now turned off by default.
You can turn it on with --external-locking. (For most users
this is never needed.)
The following startup variables/options have been renamed:
| From                         | To                            |
| myisam_bulk_insert_tree_size | bulk_insert_buffer_size       |
| query_cache_startup_type     | query_cache_type              |
| record_buffer                | read_buffer_size              |
| record_rnd_buffer            | read_rnd_buffer_size          |
| sort_buffer                  | sort_buffer_size              |
| warnings                     | log-warnings                  |
| err-log                      | --log-error (for mysqld_safe) |
record_buffer, sort_buffer and
warnings will still work in MySQL 4.0 but are deprecated.
The following SQL variables have been renamed:
| From                     | To                   |
| SQL_BIG_TABLES           | BIG_TABLES           |
| SQL_LOW_PRIORITY_UPDATES | LOW_PRIORITY_UPDATES |
| SQL_MAX_JOIN_SIZE        | MAX_JOIN_SIZE        |
| SQL_QUERY_CACHE_TYPE     | QUERY_CACHE_TYPE     |
You have to use SET GLOBAL SQL_SLAVE_SKIP_COUNTER=# instead of
SET SQL_SLAVE_SKIP_COUNTER=#.
The startup options --skip-locking and --enable-locking have been renamed to
--skip-external-locking and --external-locking, respectively.
SHOW MASTER STATUS now returns an empty set if the binary log is not
enabled.
SHOW SLAVE STATUS now returns an empty set if the slave is not initialised.
mysqld now has --temp-pool enabled by default, as this
gives better performance with some operating systems (most notably Linux).
DOUBLE and FLOAT columns now honour the
UNSIGNED flag on storage (before, UNSIGNED was ignored for
these columns).
ORDER BY column DESC now always sorts NULL values
first; in 3.23 this was not always consistent. Note: MySQL 4.0.11 restored
the original behaviour.
SHOW INDEX has two more columns (Null and Index_type)
than it had in 3.23.
CHECK, SIGNED, LOCALTIME and LOCALTIMESTAMP
are now reserved words.
The result of the bitwise operators |, &, <<,
>>, and ~ is now unsigned. This may cause problems if you
are using them in a context where you want a signed result.
See section 6.3.5 Cast Functions.
When one of the operands is UNSIGNED, the result of a subtraction will be unsigned. In other
words, before upgrading to MySQL 4.0, you should check your application
for cases where you are subtracting a value from an unsigned entity and
want a negative answer, or subtracting an unsigned value from an
integer column. You can disable this behaviour by using the
--sql-mode=NO_UNSIGNED_SUBTRACTION option when starting
mysqld. See section 6.3.5 Cast Functions.
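A small illustration of the difference:
mysql> SELECT CAST(0 AS UNSIGNED) - 1;
With the default 4.0 behaviour this should return 18446744073709551615 (the value wraps around as an unsigned BIGINT); with --sql-mode=NO_UNSIGNED_SUBTRACTION it should return -1.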
If you want to use MATCH ... AGAINST (... IN BOOLEAN MODE) with your tables,
you need to rebuild them with REPAIR TABLE table_name USE_FRM.
LOCATE() and INSTR() are case-sensitive if one of the
arguments is a binary string. Otherwise they are case-insensitive.
STRCMP() now uses the current character set when doing comparisons,
which means that the default comparison behaviour now is case-insensitive.
HEX(string) now returns the characters in string converted to
hexadecimal. If you want to convert a number to hexadecimal, you should
ensure that you call HEX() with a numeric argument.
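For example (HEX() with a string versus a numeric argument):
mysql> SELECT HEX('abc'), HEX(255);
The first expression should return '616263' (the bytes of the string in hexadecimal) and the second 'FF'.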
In 3.23, INSERT INTO ... SELECT always had IGNORE enabled.
In 4.0.1, MySQL will stop (and possibly roll back) in case of an error if you
don't specify IGNORE.
safe_mysqld as a symlink to
mysqld_safe.
mysql_drop_db, mysql_create_db, and
mysql_connect are no longer supported unless you compile
MySQL with CFLAGS=-DUSE_OLD_FUNCTIONS. Instead of doing this,
it is preferable to change the client to use the new 4.0 API.
MYSQL_FIELD structure, length and max_length have
changed from unsigned int to unsigned long. This should not
cause any problems, except that they may generate warning messages when
used as arguments in the printf() class of functions.
Use TRUNCATE TABLE when you want to delete all rows
from a table and you don't care how many rows were deleted.
(TRUNCATE TABLE is faster than DELETE FROM table_name.)
You will get an error if you have an active LOCK TABLES or
transaction when trying to execute TRUNCATE TABLE or DROP
DATABASE.
The output format of SHOW OPEN TABLES has changed.
mysql_thread_init() and
mysql_thread_end(). See section 8.1.14 How to Make a Threaded Client.
DBD-mysql version 1.2218 or newer because the older DBD modules
used the deprecated drop_db() call.
Version 2.1022 or newer is recommended.
RAND(seed) returns a different random number series in 4.0 than in
3.23; this was done to further differentiate RAND(seed) and
RAND(seed+1).
IFNULL(A,B) is now set to be the
more 'general' of the types of A and B. (The order is
STRING, REAL or INTEGER).
If you are running MySQL Server on Windows, please also see section 2.5.7 Upgrading MySQL under Windows.
MySQL Version 3.23 supports tables of the new MyISAM type and
the old ISAM type. You don't have to convert your old tables to
use these with Version 3.23. By default, all new tables will be created with
type MyISAM (unless you start mysqld with the
--default-table-type=isam option). You can change an ISAM
table to a MyISAM table with ALTER TABLE table_name TYPE=MyISAM
or the Perl script mysql_convert_table_format.
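For example, converting a single table (my_isam_table is a hypothetical name):
mysql> ALTER TABLE my_isam_table TYPE=MyISAM;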
Version 3.22 and 3.21 clients will work without any problems with a Version 3.23 server.
The following list tells what you have to watch out for when upgrading to Version 3.23:
tis620 character set must be fixed
with myisamchk -r or REPAIR TABLE.
DROP DATABASE on a symbolic linked database, both the
link and the original database are deleted. (This didn't happen in 3.22
because configure didn't detect the readlink system call.)
OPTIMIZE TABLE now works only for MyISAM tables.
For other table types, you can use ALTER TABLE to optimise the table.
During OPTIMIZE TABLE the table is now locked from other threads.
mysql is now by default started with the
option --no-named-commands (-g). This option can be disabled with
--enable-named-commands (-G). This may cause incompatibility problems in
some cases; for example, in SQL scripts that use named commands without a
semicolon. Long format commands still work from the first line.
Date functions (such as MONTH()) will now
return 0 for 0000-00-00 dates. (MySQL 3.22 returned NULL.)
If you are using the german character sort order, you must repair
all your tables with isamchk -r, as we have made some changes in
the sort order.
The result of IF will now depend on both arguments
and not only the first argument.
AUTO_INCREMENT will not work with negative numbers. The reason
for this is that negative numbers caused problems when wrapping from -1 to 0.
AUTO_INCREMENT for MyISAM tables is now handled at a lower level and
is much faster than before. In addition, for MyISAM tables, old numbers are
no longer reused, even if you delete rows from the table.
CASE, DELAYED, ELSE, END, FULLTEXT,
INNER, RIGHT, THEN, and WHEN are now reserved words.
FLOAT(X) is now a true floating-point type and not a value with a
fixed number of decimals.
In DECIMAL(length,dec), the length argument no longer
includes a place for the sign or the decimal point.
A TIME string must now be of one of the following formats:
[[[DAYS] [H]H:]MM:]SS[.fraction] or
[[[[[H]H]H]H]MM]SS[.fraction].
LIKE now compares strings using the same character comparison rules
as =. If you require the old behaviour, you can compile
MySQL with the CXXFLAGS=-DLIKE_CMP_TOUPPER flag.
REGEXP is now case-insensitive for normal (not binary) strings.
CHECK TABLE
or myisamchk for MyISAM tables (`.MYI') and
isamchk for ISAM (`.ISM') tables.
If you want your mysqldump files to be compatible between
MySQL Version 3.22 and Version 3.23, you should not use the
--opt or --all option to mysqldump.
Check all your calls to DATE_FORMAT() to make sure there is a
`%' before each format character.
(MySQL Version 3.22 and later already allowed this syntax.)
mysql_fetch_fields_direct is now a function (it was a macro) and
it returns a pointer to a MYSQL_FIELD instead of a
MYSQL_FIELD.
mysql_num_fields() can no longer be used on a MYSQL* object (it's
now a function that takes MYSQL_RES* as an argument, so you should
use mysql_field_count() instead).
In MySQL Version 3.22, the output of SELECT DISTINCT ... was
almost always sorted. In Version 3.23, you must use GROUP BY or
ORDER BY to obtain sorted output.
SUM() now returns NULL, instead of 0, if
there are no matching rows. This is required by SQL-99.
AND or OR with NULL values will now return
NULL instead of 0. This mostly affects queries that use NOT
on an AND/OR expression as NOT NULL = NULL.
LPAD() and RPAD() will shorten the result string if it's longer
than the length argument.
Nothing that affects compatibility has changed between versions 3.21 and 3.22.
The only pitfall is that new tables that are created with DATE type
columns will use the new way to store the date. You can't access these new
fields from an old version of mysqld.
After installing MySQL Version 3.22, you should start the new server
and then run the mysql_fix_privilege_tables script. This will add the
new privileges that you need to use the GRANT command. If you forget
this, you will get Access denied when you try to use ALTER
TABLE, CREATE INDEX, or DROP INDEX. If your MySQL root
user requires a password, you should give this as an argument to
mysql_fix_privilege_tables.
The C API interface to mysql_real_connect() has changed. If you have
an old client program that calls this function, you must place a 0 for
the new db argument (or recode the client to send the db
element for faster connections). You must also call mysql_init()
before calling mysql_real_connect(). This change was done to allow
the new mysql_options() function to save options in the MYSQL
handler structure.
The mysqld variable key_buffer has changed names to
key_buffer_size, but you can still use the old name in your
startup files.
If you are running a version older than Version 3.20.28 and want to switch to Version 3.21, you need to do the following:
You can start the mysqld Version 3.21 server with safe_mysqld
--old-protocol to use it with clients from a Version 3.20 distribution.
In this case, the new client function mysql_errno() will not
return any server error, only CR_UNKNOWN_ERROR (but it
works for client errors), and the server uses the old password()
checking rather than the new one.
If you are not using the --old-protocol option to
mysqld, you will need to make the following changes:
MyODBC 2.x driver.
scripts/add_long_password must be run to convert the
Password field in the mysql.user table to CHAR(16).
mysql.user table (to get 62-bit
rather than 31-bit passwords).
MySQL Version 3.20.28 and above can handle the new user table
format without affecting clients. If you have a MySQL version earlier
than Version 3.20.28, passwords will no longer work with it if you convert the
user table. So to be safe, you should first upgrade to at least Version
3.20.28 and then upgrade to Version 3.21.
The new client code works with a 3.20.x mysqld server, so
if you experience problems with 3.21.x, you can use the old 3.20.x server
without having to recompile the clients again.
If you are not using the --old-protocol option to mysqld,
old clients will issue the error message:
ERROR: Protocol mismatch. Server Version = 10 Client Version = 9
The new Perl DBI/DBD interface also supports the old
mysqlperl interface. The only change you have to make if you use
mysqlperl is to change the arguments to the connect() function.
The new arguments are: host, database, user,
and password (the user and password arguments have changed
places).
See section 8.5.2 The DBI Interface.
The following changes may affect queries in old applications:
HAVING must now be specified before any ORDER BY clause.
The parameters to LOCATE() have been swapped.
DATE,
TIME, and TIMESTAMP.
If you are using MySQL Version 3.23, you can copy the `.frm', `.MYI', and `.MYD' files between different architectures that support the same floating-point format. (MySQL takes care of any byte-swapping issues.)
The MySQL ISAM data and index files (`.ISD' and
`.ISM', respectively) are architecture-dependent and in some cases
OS-dependent. If you want to move your applications to another machine
that has a different architecture or OS than your current machine, you
should not try to move a database by simply copying the files to the
other machine. Use mysqldump instead.
By default, mysqldump will create a file full of SQL statements.
You can then transfer the file to the other machine and feed it as input
to the mysql client.
Try mysqldump --help to see what options are available.
If you are moving the data to a newer version of MySQL, you should use
mysqldump --opt with the newer version to get a fast, compact dump.
The easiest (although not the fastest) way to move a database between two machines is to run the following commands on the machine on which the database is located:
shell> mysqladmin -h 'other hostname' create db_name
shell> mysqldump --opt db_name \
| mysql -h 'other hostname' db_name
If you want to copy a database from a remote machine over a slow network, you can use:
shell> mysqladmin create db_name
shell> mysqldump -h 'other hostname' --opt --compress db_name \
| mysql db_name
You can also store the result in a file, then transfer the file to the target machine and load the file into the database there. For example, you can dump a database to a file on the source machine like this:
shell> mysqldump --quick db_name | gzip > db_name.contents.gz
(The file created in this example is compressed.) Transfer the file containing the database contents to the target machine and run these commands there:
shell> mysqladmin create db_name
shell> gunzip < db_name.contents.gz | mysql db_name
You can also use mysqldump and mysqlimport to accomplish
the database transfer.
For big tables, this is much faster than simply using mysqldump.
In the following commands, DUMPDIR represents the full pathname
of the directory you use to store the output from mysqldump.
First, create the directory for the output files and dump the database:
shell> mkdir DUMPDIR
shell> mysqldump --tab=DUMPDIR db_name
Then transfer the files in the DUMPDIR directory to some corresponding
directory on the target machine and load the files into MySQL
there:
shell> mysqladmin create db_name           # create database
shell> cat DUMPDIR/*.sql | mysql db_name   # create tables in database
shell> mysqlimport db_name DUMPDIR/*.txt   # load data into tables
Also, don't forget to copy the mysql database because that's where the
grant tables (user, db, host) are stored. You may have
to run commands as the MySQL root user on the new machine
until you have the mysql database in place.
After you import the mysql database on the new machine, execute
mysqladmin flush-privileges so that the server reloads the grant table
information.
When upgrading MySQL under Windows, please follow these steps:
Stop the running server (for example, with NET STOP mysql).
Install the new distribution, for example into C:\mysql4.
If you installed into a new directory, adjust the basedir parameter in the `my.ini' file of your Windows
directory (e.g. C:\WINNT) to point to it.
Restart the server (for example, with NET START mysql).
Possible error situations:
A system error has occurred. System error 1067 has occurred. The process terminated unexpectedly.
This cryptic error means that your `my.cnf' file (by default `C:\my.cnf') contains a parameter that cannot be recognised by MySQL. You can verify that this is the case by trying to restart MySQL with the `my.cnf' file renamed e.g. to `my.cnf.old'. Once you have verified it, you need to identify which parameter is the culprit by retrying to start MySQL with successively larger portions of the original `my.cnf' file.
The following notes regarding glibc apply only to the situation
when you build MySQL
yourself. If you are running Linux on an x86 machine, in most cases it is
much better for you to just use our binary. We link our binaries against
the best patched version of glibc we can come up with and with the
best compiler options, in an attempt to make it suitable for a high-load
server. So if you read the following text, and are in doubt about
what you should do, try our binary first to see if it meets your needs, and
worry about your own build only after you have discovered that our binary is
not good enough. In that case, we would appreciate a note about it, so we
can build a better binary next time. For a typical user, even for setups with
a lot of concurrent connections and/or tables exceeding the 2G limit, our
binary in most cases is the best choice.
MySQL uses LinuxThreads on Linux. If you are using an old
Linux version that doesn't have glibc2, you must install
LinuxThreads before trying to compile MySQL. You can get
LinuxThreads at http://www.mysql.com/downloads/os-linux.html.
Note: we have seen some strange problems with Linux 2.2.14 and MySQL on SMP systems. If you have a SMP system, we recommend you upgrade to Linux 2.4 as soon as possible. Your system will be faster and more stable by doing this.
Note that glibc versions before and including Version 2.1.1 have
a fatal bug in pthread_mutex_timedwait handling, which is used
when you do INSERT DELAYED. We recommend that you not use
INSERT DELAYED before upgrading glibc.
If you plan to have 1000+ concurrent connections, you will need to make
some changes to LinuxThreads, recompile it, and relink MySQL against
the new `libpthread.a'. Increase PTHREAD_THREADS_MAX in
`sysdeps/unix/sysv/linux/bits/local_lim.h' to 4096 and decrease
STACK_SIZE in `linuxthreads/internals.h' to 256 KB. The paths are
relative to the root of glibc. Note that MySQL will not be
stable with around 600-1000 connections if STACK_SIZE is the default
of 2 MB.
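As a rough sketch (the file locations are those named above; the exact original defines may differ between glibc releases), the edited lines might look like this.
In `sysdeps/unix/sysv/linux/bits/local_lim.h':
#define PTHREAD_THREADS_MAX 4096
In `linuxthreads/internals.h':
#define STACK_SIZE (256 * 1024)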
If MySQL can't open enough files, or connections, it may be that you haven't configured Linux to handle enough files.
In Linux 2.2 and onward, you can check the number of allocated file handles by doing:
cat /proc/sys/fs/file-max
cat /proc/sys/fs/dquot-max
cat /proc/sys/fs/super-max
If you have more than 16 MB of memory, you should add something like the following to your init scripts (e.g. `/etc/init.d/boot.local' on SuSE Linux):
echo 65536 > /proc/sys/fs/file-max
echo 8192 > /proc/sys/fs/dquot-max
echo 1024 > /proc/sys/fs/super-max
You can also run the preceding commands from the command-line as root, but these settings will be lost the next time your computer reboots.
Alternatively, you can set these parameters on bootup by using the
sysctl tool, which is used by many Linux distributions (SuSE has
added it as well, beginning with SuSE Linux 8.0). Just put the following
values into a file named `/etc/sysctl.conf':
# Increase some values for MySQL
fs.file-max = 65536
fs.dquot-max = 8192
fs.super-max = 1024
You should also add the following to `/etc/my.cnf':
[safe_mysqld]
open-files-limit=8192
This should allow MySQL to create up to 8192 connections + files.
The STACK_SIZE constant in LinuxThreads controls the spacing of thread
stacks in the address space. It needs to be large enough so that there will
be plenty of room for the stack of each individual thread, but small enough
to keep the stack of some threads from running into the global mysqld
data. Unfortunately, the Linux implementation of mmap(), as we have
experimentally discovered, will successfully unmap an already mapped region
if you ask it to map out an address already in use, zeroing out the data
on the entire page, instead of returning an error. So, the safety of
mysqld or any other threaded application depends on the "gentleman"
behaviour of the code that creates threads. The user must take measures to
make sure the number of running threads at any time is sufficiently low for
thread stacks to stay away from the global heap. With mysqld, you
should enforce this "gentleman" behaviour by setting a reasonable value for
the max_connections variable.
If you build MySQL yourself and do not want to mess with patching
LinuxThreads, you should set max_connections to a value no higher
than 500. It should be even less if you have a large key buffer, large
heap tables, or some other things that make mysqld allocate a lot
of memory, or if you are running a 2.2 kernel with a 2G patch. If you are
using our binary or RPM version 3.23.25 or later, you can safely set
max_connections at 1500, assuming no large key buffer or heap tables
with lots of data. The more you reduce STACK_SIZE in LinuxThreads
the more threads you can safely create. We recommend the values between
128K and 256K.
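For example, a hedged `/etc/my.cnf' entry limiting connections for a self-built server with unpatched LinuxThreads (500 is the conservative value suggested above; adjust to your setup):
[mysqld]
set-variable = max_connections=500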
If you use a lot of concurrent connections, you may suffer from a "feature"
in the 2.2 kernel that penalises a process for forking or cloning a child
in an attempt to prevent a fork bomb attack. This will cause MySQL
not to scale well as you increase the number of concurrent clients. On
single-CPU systems, we have seen this manifested in a very slow thread
creation, which means it may take a long time to connect to MySQL
(as long as 1 minute), and it may take just as long to shut it down. On
multiple-CPU systems, we have observed a gradual drop in query speed as
the number of clients increases. In the process of trying to find a
solution, we have received a kernel patch from one of our users, who
claimed it made a lot of difference for his site. The patch is available at
http://www.mysql.com/Downloads/Patches/linux-fork.patch. We have
now done rather extensive testing of this patch on both development and
production systems. It has significantly
improved MySQL performance without causing any problems and we now
recommend it to our users who are still running high-load servers on
2.2 kernels. This issue has been fixed in the 2.4 kernel, so if you are not
satisfied with
the current performance of your system, rather than patching your 2.2 kernel,
it might be easier to just upgrade to 2.4, which will also give you a nice
SMP boost in addition to fixing this fairness bug.
We have tested MySQL on the 2.4 kernel on a 2-CPU machine and
found that MySQL scales much better; there was virtually no slowdown
in query throughput all the way up
to 1000 clients, and the MySQL scaling factor (computed as the ratio of
maximum throughput to the throughput with one client) was 180%.
We have observed similar results on a 4-CPU system: virtually no
slowdown as the number of
clients was increased up to 1000, and a 300% scaling factor. So for a high-load
SMP server we would definitely recommend the 2.4 kernel at this point. We
have discovered that it is essential to run the mysqld process with the
highest possible priority on the 2.4 kernel to achieve maximum performance.
This can be done by adding
the renice -20 $$ command to safe_mysqld. In our testing on a
4-CPU machine, increasing the priority gave a 60% increase in throughput with
400 clients.
We are currently also trying to collect
more info on how well MySQL performs on the 2.4 kernel on 4-way and 8-way
systems. If you have access to such a system and have done some benchmarks,
please send a mail to docs@mysql.com with the results; we will
include them in the manual.
There is another issue that greatly hurts MySQL performance,
especially on SMP systems. The implementation of mutex in
LinuxThreads in glibc-2.1 is very bad for programs with many
threads that only
hold the mutex for a short time. On an SMP system, ironic as it is, if
you link MySQL against unmodified LinuxThreads,
removing processors from the machine improves MySQL performance
in many cases. We have made a patch available for glibc 2.1.3
to correct this behaviour
(http://www.mysql.com/Downloads/Linux/linuxthreads-2.1-patch).
With glibc-2.2.2
MySQL version 3.23.36 will use the adaptive mutex, which is much
better than even the patched one in glibc-2.1.3. Be warned, however,
that under some conditions, the current mutex code in glibc-2.2.2
overspins, which hurts MySQL performance. The chance of this
condition can be reduced by renicing the mysqld process to the highest
priority. We have also been able to correct the overspin behaviour with
a patch, available at
http://www.mysql.com/Downloads/Linux/linuxthreads-2.2.2.patch.
It combines the correction of overspin, maximum number of
threads, and stack spacing all in one. You will need to apply it in the
linuxthreads directory with
patch -p0 </tmp/linuxthreads-2.2.2.patch.
We hope it will be included in
some form in future releases of glibc-2.2. In any case, if
you link against glibc-2.2.2 you still need to correct
STACK_SIZE and PTHREAD_THREADS_MAX. We hope that the defaults
will be corrected to some more acceptable values for high-load
MySQL setup in the future, so that your own build can be reduced
to ./configure; make; make install.
We recommend that you use the above patches to build a special static
version of libpthread.a and use it only for statically linking
against MySQL. We know that the patches are safe for MySQL
and significantly improve its performance, but we cannot say anything
about other applications. If you link other applications against the
patched version of the library, or build a patched shared version and
install it on your system, you are doing it at your own risk with regard
to other applications that depend on LinuxThreads.
If you experience any strange problems during the installation of MySQL, or with some common utilities hanging, it is very likely that they are either library or compiler related. If this is the case, using our binary will resolve them.
One known problem with the binary distribution is that with older Linux
systems that use libc (like Red Hat 4.x or Slackware), you will get
some non-fatal problems with hostname resolution.
See section 2.6.1.1 Linux Notes for Binary Distributions.
When using LinuxThreads you will see a minimum of three processes running. These are in fact threads. There will be one thread for the LinuxThreads manager, one thread to handle connections, and one thread to handle alarms and signals.
Note that the Linux kernel and the LinuxThread library can by default only have 1024 threads. This means that you can only have up to 1021 connections to MySQL on an unpatched system. The page http://www.volano.com/linuxnotes.html contains information on how to get around this limit.
If you see a dead mysqld daemon process with ps, this usually
means that you have found a bug in MySQL or you have a corrupted
table. See section A.4.1 What To Do If MySQL Keeps Crashing.
To get a core dump on Linux if mysqld dies with a SIGSEGV signal,
you can start mysqld with the --core-file option. Note
that you also probably need to raise the core file size by adding
ulimit -c 1000000 to safe_mysqld or starting
safe_mysqld with --core-file-size=1000000.
See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
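A minimal sketch of getting a core file on a crash, based on the options mentioned above (the size limit is illustrative):
shell> ulimit -c 1000000
shell> safe_mysqld --core-file &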
If you are linking your own MySQL client and get the error:
ld.so.1: ./my: fatal: libmysqlclient.so.4: open failed: No such file or directory
when executing it, the problem can be avoided by one of the following methods:
Link the client with the following flag (instead of -Lpath):
-Wl,r/path-libmysqlclient.so.
Copy libmysqlclient.so to `/usr/lib'.
Add the path to the directory where `libmysqlclient.so' is located to the
LD_RUN_PATH environment variable before running your client.
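For example, a hedged sketch of the last method for a Bourne-style shell (the library path is an assumption; use the directory where your `libmysqlclient.so' actually lives):
shell> LD_RUN_PATH=/usr/local/mysql/lib/mysql
shell> export LD_RUN_PATH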
If you are using the Fujitsu compiler (fcc / FCC) you will have
some problems compiling MySQL because the Linux header files are very
gcc oriented.
The following configure line should work with fcc/FCC:
CC=fcc CFLAGS="-O -K fast -K lib -K omitfp -Kpreex -D_GNU_SOURCE \ -DCONST=const -DNO_STRTOLL_PROTO" CXX=FCC CXXFLAGS="-O -K fast -K lib \ -K omitfp -K preex --no_exceptions --no_rtti -D_GNU_SOURCE -DCONST=const \ -Dalloca=__builtin_alloca -DNO_STRTOLL_PROTO \ '-D_EXTERN_INLINE=static __inline'" ./configure --prefix=/usr/local/mysql \ --enable-assembler --with-mysqld-ldflags=-all-static --disable-shared \ --with-low-memory
MySQL needs at least Linux Version 2.0.
Warning: We have reports from some MySQL users that they have got serious stability problems with MySQL with Linux kernel 2.2.14. If you are using this kernel you should upgrade to 2.2.19 (or newer) or to a 2.4 kernel. If you have a multi-cpu box, then you should seriously consider using 2.4 as this will give you a significant speed boost.
The binary release is linked with -static, which means you do not
normally need to worry about which version of the system libraries you
have. You need not install LinuxThreads, either. A program linked with
-static is slightly bigger than a dynamically linked program but
also slightly faster (3-5%). One problem, however, is that you can't use
user-definable functions (UDFs) with a statically linked program. If
you are going to write or use UDFs (this is something for C or C++
programmers only), you must compile MySQL yourself, using dynamic linking.
If you are using a libc-based system (instead of a glibc2
system), you will probably get some problems with hostname resolving and
getpwnam() with the binary release. (This is because glibc
unfortunately depends on some external libraries to resolve hostnames
and getpwent(), even when compiled with -static). In this
case you probably get the following error message when you run
mysql_install_db:
Sorry, the host 'xxxx' could not be looked up
or the following error when you try to run mysqld with the --user
option:
getpwnam: No such file or directory
You can solve this problem in one of the following ways:
Get a MySQL source distribution (an RPM or the tar.gz
distribution) and install this instead.
mysql_install_db --force; this will not execute the
resolveip test in mysql_install_db. The downside is that
you can't use host names in the grant tables; you must use IP numbers
instead (except for localhost). If you are using an old MySQL
release that doesn't support --force, you have to remove the
resolveip test in mysql_install_db with an editor.
Start mysqld with su instead of using --user.
The Linux-Intel binary and RPM releases of MySQL are configured for the highest possible speed. We are always trying to use the fastest stable compiler available.
MySQL Perl support requires Perl Version 5.004_03 or newer.
On some Linux 2.2 versions, you may get the error Resource
temporarily unavailable when you do a lot of new connections to a
mysqld server over TCP/IP.
The problem is that Linux has a delay between when you close a TCP/IP socket and until this is actually freed by the system. As there is only room for a finite number of TCP/IP slots, you will get the above error if you try to do too many new TCP/IP connections during a small time, like when you run the MySQL `test-connect' benchmark over TCP/IP.
We have mailed about this problem a couple of times to different Linux mailing lists but have never been able to resolve this properly.
The only known 'fix' to this problem is to use persistent connections in
your clients or use sockets, if you are running the database server
and clients on the same machine. We hope that the Linux 2.4
kernel will fix this problem in the future.
MySQL requires libc Version 5.4.12 or newer. It's known to
work with libc 5.4.46. glibc Version 2.0.6 and later should
also work. There have been some problems with the glibc RPMs from
Red Hat, so if you have problems, check whether there are any updates.
The glibc 2.0.7-19 and 2.0.7-29 RPMs are known to work.
If you are using Red Hat 8.0 or a new glibc 2.2.x library you should start
mysqld with the option --thread-stack=192K. If you don't,
mysqld will die in gethostbyaddr() because the new glibc library
requires more than 128K of stack for this call. This stack size is now the
default on MySQL 4.0.10 and above.
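For example, a minimal sketch of passing the option through the startup wrapper (this assumes you start the server with safe_mysqld):
shell> safe_mysqld --thread-stack=192K &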
If you are using gcc 3.0 and above to compile MySQL, you must install
the libstdc++v3 library before compiling MySQL; if you don't do
this you will get an error about a missing __cxa_pure_virtual
symbol during linking.
On some older Linux distributions, configure may produce an error
like this:
Syntax error in sched.h. Change _P to __P in the /usr/include/sched.h file. See the Installation chapter in the Reference Manual.
Just do what the error message says and add an extra underscore to the
_P macro that has only one underscore, then try again.
You may get some warnings when compiling; those shown here can be ignored:
mysqld.cc -o objs-thread/mysqld.o
mysqld.cc: In function `void init_signals()':
mysqld.cc:315: warning: assignment of negative value `-1' to `long unsigned int'
mysqld.cc: In function `void * signal_hand(void *)':
mysqld.cc:346: warning: assignment of negative value `-1' to `long unsigned int'
mysql.server can be found in the `share/mysql' directory
under the MySQL installation directory or in the
`support-files' directory of the MySQL source tree.
If mysqld always core dumps when it starts up, the problem may be that
you have an old `/lib/libc.a'. Try renaming it, then remove
`sql/mysqld' and do a new make install and try again. This
problem has been reported on some Slackware installations.
If you get the following error when linking mysqld,
it means that your `libg++.a' is not installed correctly:
/usr/lib/libc.a(putc.o): In function `_IO_putc': putc.o(.text+0x0): multiple definition of `_IO_putc'
You can avoid using `libg++.a' by running configure like this:
shell> CXX=gcc ./configure
In some implementations, readdir_r() is broken. The symptom is that
SHOW DATABASES always returns an empty set. This can be fixed by
removing HAVE_READDIR_R from `config.h' after configuring and
before compiling.
Some problems will require patching your Linux installation. The patch can
be found at
http://www.mysql.com/Downloads/patches/Linux-sparc-2.0.30.diff.
This patch is against the Linux distribution `sparclinux-2.0.30.tar.gz'
that is available at vger.rutgers.edu (a version of Linux that was
never merged with the official 2.0.30). You must also install LinuxThreads
Version 0.6 or newer.
MySQL Version 3.23.12 is the first MySQL version that is tested on Linux-Alpha. If you plan to use MySQL on Linux-Alpha, you should ensure that you have this version or newer.
We have tested MySQL on Alpha with our benchmarks and test suite, and it appears to work nicely.
We currently build the MySQL binary packages on SuSE Linux 7.0 for AXP, kernel 2.4.4-SMP, Compaq C compiler (V6.2-505) and Compaq C++ compiler (V6.3-006) on a Compaq DS20 machine with an Alpha EV6 processor.
You can find the above compilers at http://www.support.compaq.com/alpha-tools/. By using these compilers instead of gcc, we get about 9-14% better performance with MySQL.
Note that until MySQL version 3.23.52 and 4.0.2 we optimised the binary for
the current CPU only (by using the -fast compile option); this meant
that you could only use our binaries if you had an Alpha EV6 processor.
Starting with those releases we added the -arch generic flag
to our compile options, which makes sure the binary runs on all Alpha
processors. We also compile statically to avoid library problems, using the following configure line:
CC=ccc CFLAGS="-fast -arch generic" CXX=cxx \ CXXFLAGS="-fast -arch generic -noexceptions -nortti" \ ./configure --prefix=/usr/local/mysql --disable-shared \ --with-extra-charsets=complex --enable-thread-safe-client \ --with-mysqld-ldflags=-non_shared --with-client-ldflags=-non_shared
If you want to use egcs the following configure line worked for us:
CFLAGS="-O3 -fomit-frame-pointer" CXX=gcc \ CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors \ -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \ --disable-shared
Some known problems when running MySQL on Linux-Alpha:
gdb 4.18. You should download and use gdb 5.1 instead!
If you link mysqld statically when using gcc, the
resulting image will core dump at start. In other words, don't
use --with-mysqld-ldflags=-all-static with gcc.
MySQL should work on MkLinux with the newest glibc package
(tested with glibc 2.0.7).
To get MySQL to work on Qube2, (Linux Mips), you need the
newest glibc libraries (glibc-2.0.7-29C2 is known to
work). You must also use the egcs C++ compiler
(egcs-1.0.2-9, gcc 2.95.2 or newer).
To get MySQL to compile on Linux IA64, we use the following compile line:
Using gcc-2.96:
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc \ CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors \ -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \ "--with-comment=Official MySQL binary" --with-extra-charsets=complex
On IA64 the MySQL client binaries are using shared libraries. This means
that if you install our binary distribution in some other place than
`/usr/local/mysql' you need to either modify `/etc/ld.so.conf'
or add the path to the directory where you have `libmysqlclient.so'
to the LD_LIBRARY_PATH environment variable.
See section A.3.1 Problems When Linking with the MySQL Client Library.
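A hedged sketch of the `/etc/ld.so.conf' approach mentioned above (the library path is an assumption; use the directory that actually contains `libmysqlclient.so' in your installation):
shell> echo "/usr/local/mysql/lib/mysql" >> /etc/ld.so.conf
shell> ldconfig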
This section describes using MySQL on Windows. This information is also provided in the `README' file that comes with the MySQL Windows distribution. See section 2.1.2 Installing MySQL on Windows.
MySQL uses TCP/IP to connect a client to a server. (This will allow any machine on your network to connect to your MySQL server.) Because of this, you must install TCP/IP on your machine before starting MySQL. You can find TCP/IP on your Windows CD-ROM.
Note that if you are using an old Windows 95 release (for example OSR2), it's likely that you have an old Winsock package; MySQL requires Winsock 2! You can get the newest Winsock from http://www.microsoft.com/. Windows 98 has the new Winsock 2 library, so the above doesn't apply there.
To start the mysqld server, you should start an MS-DOS
window and type:
C:\> C:\mysql\bin\mysqld
This will start mysqld in the background without a window.
You can kill the MySQL server by executing:
C:\> C:\mysql\bin\mysqladmin -u root shutdown
This calls the MySQL administration utility as user `root', which is the default Administrator in the MySQL grant system. Please note that the MySQL grant system is wholly independent from any login users under Windows.
If mysqld doesn't start, please check the
`\mysql\data\mysql.err' file to see if the server wrote any
message there to indicate the cause of the problem. You can also
try to start the server with mysqld --standalone; in this
case, you may get some useful information on the screen that may
help solve the problem.
The last option is to start mysqld with
--standalone --debug.
In this case mysqld will write a log file
`C:\mysqld.trace' that should contain the reason why
mysqld doesn't start. See section E.1.2 Creating Trace Files.
Use mysqld --help to display all the options that
mysqld understands!
To get MySQL to work with TCP/IP on Windows NT 4, you must install service pack 3 (or newer)!
Normally you should install MySQL as a service on Windows NT/2000/XP. In case the server was already running, first stop it using the following command:
C:\mysql\bin> mysqladmin -u root shutdown
This calls the MySQL administration utility as user `root',
which is the default Administrator in the MySQL grant system.
Please note that the MySQL grant system is wholly independent from
any login users under Windows.
Now install the server service:
C:\mysql\bin> mysqld --install
If any options are required, they must be specified as
``Start parameters'' in the Windows Services
utility before you start the MySQL service.
The Services utility
(Windows Service Control Manager) can be found in the
Windows Control Panel (under Administrative Tools
on Windows 2000). It is advisable to close the Services utility
while performing the --install or --remove
operations; this prevents some odd errors.
For information about which server binary to run, see section 2.1.2.2 Preparing the Windows MySQL Environment.
Please note that from MySQL version 3.23.44, you have the choice
of setting up the service as Manual instead (if you don't wish
the service to be started automatically during the boot process):
C:\mysql\bin> mysqld --install-manual
The service is installed with the name MySQL. Once
installed, it can be immediately started from the Services
utility, or by using the command NET START MySQL.
Once running, mysqld can be stopped using
mysqladmin, from the Services utility, or by using the
command NET STOP MySQL.
When running as a service, the operating system will automatically stop
the MySQL service on computer shutdown. In MySQL versions < 3.23.47,
Windows only waited for a few seconds for the shutdown to complete, and
killed the database server process if the time limit was exceeded
(potentially causing problems). For instance, at the next startup the
InnoDB storage engine had to do crash recovery. Starting from
MySQL version 3.23.48, Windows will wait longer for the MySQL server
shutdown to complete. If you notice this is not enough for your
installation, it is safest to run the MySQL server not as a service, but
from the Command prompt, and shut it down with mysqladmin shutdown.
There is a problem that Windows NT (but not Windows 2000/XP) by default only
waits 20 seconds for a service to shut down, and after that kills the
service process. You can increase this default by opening the Registry
Editor `\winnt\system32\regedt32.exe' and editing the value of
WaitToKillServiceTimeout at
`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control'
in the Registry tree. Specify the new larger value in milliseconds,
for example 120000 to have Windows NT wait up to 120 seconds.
Please note that when run as a service, mysqld
has no access to a console and so no messages can be seen.
Errors can be checked in `c:\mysql\data\mysql.err'.
If you have problems installing mysqld as a
service, try starting it with the full path:
C:\> C:\mysql\bin\mysqld --install
If this doesn't work, you can get mysqld to
start properly by fixing the path in the registry!
If you don't want to start mysqld as a service,
you can start it as follows:
C:\> C:\mysql\bin\mysqld --standalone
or
C:\> C:\mysql\bin\mysqld --standalone --debug
The last method gives you a debug trace in `C:\mysqld.trace'. See section E.1.2 Creating Trace Files.
MySQL supports TCP/IP on all Windows platforms and named pipes on
NT/2000/XP. Named pipes are actually slower than TCP/IP, so the
default is to use TCP/IP regardless of the platform; also, some users
have experienced problems shutting down the MySQL server when named
pipes are used. Starting from 3.23.50, named pipes are only enabled
if mysqld is started with --enable-named-pipe.
You can force a MySQL client to use named pipes by specifying the
--pipe option or by specifying . as the host name. Use the
--socket option to specify the name of the pipe.
In MySQL 4.1 you should use the --protocol=PIPE option.
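For example, hedged ways of forcing a named-pipe connection from the command line (test is just an example database):
C:\> C:\mysql\bin\mysql --pipe test
C:\> C:\mysql\bin\mysql --host=. test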
You can test whether MySQL is working by executing any of the following commands:
C:\> C:\mysql\bin\mysqlshow
C:\> C:\mysql\bin\mysqlshow -u root mysql
C:\> C:\mysql\bin\mysqladmin version status proc
C:\> C:\mysql\bin\mysql test
If mysqld is slow to answer to connections on Windows 9x/Me, there is
probably a problem with your DNS. In this case, start mysqld with
--skip-name-resolve and use only localhost and IP numbers in
the MySQL grant tables.
There are two versions of the MySQL command-line tool:
| Binary | Description |
| mysql  | Compiled on native Windows, offering limited text editing capabilities. |
| mysqlc | Compiled with the Cygnus GNU compiler and libraries, which offers readline editing. |
If you want to use mysqlc.exe, you must copy
`C:\mysql\lib\cygwinb19.dll' to your Windows system directory
(`\windows\system' or similar place).
The default privileges on Windows give all local users full privileges
to all databases without specifying a password. To make MySQL
more secure, you should set a password for all users and remove the row in
the mysql.user table that has Host='localhost' and
User=''.
You should also add a password for the root user. The following
example starts by removing the anonymous user that has all privileges,
then sets a root user password:
C:\> C:\mysql\bin\mysql mysql
mysql> DELETE FROM user WHERE Host='localhost' AND User='';
mysql> QUIT
C:\> C:\mysql\bin\mysqladmin reload
C:\> C:\mysql\bin\mysqladmin -u root password your_password
After you've set the password, if you want to take down the mysqld
server, you can do so using this command:
C:\> mysqladmin --user=root --password=your_password shutdown
If you are using the old shareware version of MySQL Version
3.21 under Windows, the above command will fail with an error:
parse error near 'SET password'. The solution for
this is to download and upgrade to the latest MySQL version,
which is now freely available.
With the current MySQL versions you can easily add new users
and change privileges with GRANT and REVOKE commands.
See section 4.3.1 GRANT and REVOKE Syntax.
Here is a note about how to get a secure connection to a remote MySQL server with SSH (by David Carlson dcarlson@mplcomm.com):
SecureCRT from http://www.vandyke.com/.
Another option is f-secure from http://www.f-secure.com/. You
can also find some free ones on Google at
http://directory.google.com/Top/Computers/Security/Products_and_Tools/Cryptography/SSH/Clients/Windows/.
Host_Name = yourmysqlserver_URL_or_IP.
Set userid=your_userid to log in to your server (probably not the same
as your MySQL login/password).
local_port: 3306, remote_host: yourmysqlservername_or_ip, remote_port: 3306 )
or a local forward (Set port: 3306, host: localhost, remote port: 3306).
localhost
for the MySQL host server, not yourmysqlservername.
You should now have an ODBC connection to MySQL, encrypted using SSH.
Beginning with MySQL Version 3.23.16, the mysqld-max
and mysql-max-nt servers in the MySQL distribution are
compiled with the -DUSE_SYMDIR option. This allows you to put a
database on different disk by adding a symbolic link to it
(in a manner similar to the way that symbolic links work on Unix).
On Windows, you make a symbolic link to a database by creating a file that contains the path to the destination directory and saving this in the `mysql_data' directory under the filename `database.sym'. Note that the symbolic link will be used only if the directory `mysql_data_dir\database' doesn't exist.
For example, if the MySQL data directory is `C:\mysql\data'
and you want to have database foo located at `D:\data\foo', you
should create the file `C:\mysql\data\foo.sym' that contains the
text D:\data\foo\. After that, all tables created in the database
foo will be created in `D:\data\foo'.
Note that because of the speed penalty you get when opening every table, we have not enabled this by default even if you have compiled MySQL with support for this. To enable symlinks you should put in your `my.cnf' or `my.ini' file the following entry:
[mysqld]
symbolic-links
In MySQL 4.0 --symbolic-links is enabled by default. If you
don't need this you can use the skip-symbolic-links option to
disable symlinks.
In your source files, you should include `windows.h' before you include `mysql.h':
#if defined(_WIN32) || defined(_WIN64)
#include <windows.h>
#endif
#include <mysql.h>
You can either link your code with the dynamic `libmysql.lib' library, which is just a wrapper to load in `libmysql.dll' on demand, or link with the static `mysqlclient.lib' library.
Note that as the mysqlclient libraries are compiled as threaded libraries, you should also compile your code to be multi-threaded!
MySQL-Windows has by now proven itself to be very stable. This version of MySQL has the same features as the corresponding Unix version with the following exceptions:
mysqld for an extended time on Windows 95 if your server handles
many connections! Other versions of Windows don't suffer from this bug.
pread() and pwrite() calls to be
able to mix INSERT and SELECT. Currently we use mutexes
to emulate pread()/pwrite(). We will, in the long run,
replace the file level interface with a virtual interface so that we can
use the readfile()/writefile() interface on NT/2000/XP to
get more speed.
The current implementation limits the number of open files MySQL
can use to 1024, which means that you will not be able to run as many
concurrent threads on NT/2000/XP as on Unix.
mysqladmin kill will not work on a sleeping connection.
mysqladmin shutdown can't abort as long as there are sleeping
connections.
DROP DATABASE
mysqladmin shutdown.
LOAD
DATA INFILE or SELECT ... INTO OUTFILE, you must double the `\'
character:
mysql> LOAD DATA INFILE "C:\\tmp\\skr.txt" INTO TABLE skr; mysql> SELECT * INTO OUTFILE 'C:\\tmp\\skr.txt' FROM skr;Alternatively, use Unix style filenames with `/' characters:
mysql> LOAD DATA INFILE "C:/tmp/skr.txt" INTO TABLE skr; mysql> SELECT * INTO OUTFILE 'C:/tmp/skr.txt' FROM skr;
^Z / CHAR(24), Windows will think it
found end-of-file and will abort the program.
This is mainly a problem when you try to apply a binary log as follows:
mysqlbinlog binary-log-name | mysql --user=root
If you get a problem applying the log and suspect it's because of an
^Z
/ CHAR(24) character you can use the following workaround:
mysqlbinlog binary-log-file --result-file=/tmp/bin.sql
mysql --user=root -e "source /tmp/bin.sql"
The latter command can also be used to reliably read in any SQL file that may contain binary data.
Can't open named pipe error
error 2017: can't open named pipe to host: . pipe...
This is because the release version of MySQL uses named pipes on NT by default. You can avoid this error by using the
--host=localhost
option to the new MySQL clients or create an option file
`C:\my.cnf' that contains the following information:
[client]
host = localhost
Starting from 3.23.50, named pipes are only enabled if
mysqld is started
with --enable-named-pipe.
Access denied for user error
If you get the error Access denied for user: 'some-user@unknown'
to database 'mysql' when accessing a MySQL server on the same
machine, this means that MySQL can't resolve your host name
properly.
To fix this, you should create a file `\windows\hosts' with the
following information:
127.0.0.1 localhost
ALTER TABLE
While executing an ALTER TABLE statement, the table is locked
from usage by other threads. This has to do with the fact that on Windows,
you can't delete a file that is in use by another thread. (In the future,
we may find some way to work around this problem.)
DROP TABLE on a table that is in use by a MERGE table will
not work on Windows because the MERGE handler does the table mapping
hidden from the upper layer of MySQL. Because Windows doesn't allow you
to drop files that are open, you first must flush all MERGE
tables (with FLUSH TABLES) or drop the MERGE table before
dropping the table. We will fix this at the same time we introduce
VIEWs.
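As a sketch of the workaround just described (the table name t1 is hypothetical):
mysql> FLUSH TABLES;
mysql> DROP TABLE t1;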
DATA DIRECTORY and INDEX DIRECTORY directives in
CREATE TABLE are ignored on Windows, because Windows doesn't support
symbolic links.
Here are some open issues for anyone who might want to help us with the Windows release:
MYSQL.DLL server. This should include everything in
a standard MySQL server, except thread creation. This will make
MySQL much easier to use in applications that don't need a true
client/server and don't need to access the server from other hosts.
mysqld as a service with --install (on NT)
it would be nice if you could also add default options on the command-line.
For the moment, the workaround is to list the parameters in the
`C:\my.cnf' file instead.
mysqld from the task manager.
For the moment, you must use mysqladmin shutdown.
readline to Windows for use in the mysql command-line tool.
mysql,
mysqlshow, mysqladmin, and mysqldump) would be nice.
mysqladmin kill on Windows.
mysqld always starts in the "C" locale and not in the default locale.
We would like to have mysqld use the current locale for the sort order.
Other Windows-specific issues are described in the `README' file that comes with the MySQL-Windows distribution.
On Solaris, you may run into trouble even before you get the MySQL
distribution unpacked! Solaris tar can't handle long file names, so
you may see an error like this when you unpack MySQL:
x mysql-3.22.12-beta/bench/Results/ATIS-mysql_odbc-NT_4.0-cmp-db2,\
informix,ms-sql,mysql,oracle,solid,sybase, 0 bytes, 0 tape blocks
tar: directory checksum error
In this case, you must use GNU tar (gtar) to unpack the
distribution. You can find a precompiled copy for Solaris at
http://www.mysql.com/downloads/os-solaris.html.
Sun native threads only work on Solaris 2.5 and higher. For Version 2.4 and earlier, MySQL will automatically use MIT-pthreads. See section 2.3.6 MIT-pthreads Notes.
If you get the following error from configure:
checking for restartable system calls... configure: error can not run test programs while cross compiling
This means that you have something wrong with your compiler installation! In this case you should upgrade your compiler to a newer version. You may also be able to solve this problem by inserting the following row into the `config.cache' file:
ac_cv_sys_restartable_syscalls=${ac_cv_sys_restartable_syscalls='no'}
If you are using Solaris on a SPARC, the recommended compiler is
gcc 2.95.2 or 3.2. You can find this at http://gcc.gnu.org/.
Note that egcs 1.1.1 and gcc 2.8.1 don't work reliably on
SPARC!
The recommended configure line when using gcc 2.95.2 is:
CC=gcc CFLAGS="-O3" \ CXX=gcc CXXFLAGS="-O3 -felide-constructors -fno-exceptions -fno-rtti" \ ./configure --prefix=/usr/local/mysql --with-low-memory --enable-assembler
If you have an UltraSPARC, you can get 4% more performance by adding "-mcpu=v8 -Wa,-xarch=v8plusa" to CFLAGS and CXXFLAGS.
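For example, a sketch of the gcc 2.95.2 configure line above with those UltraSPARC flags added:
CC=gcc CFLAGS="-O3 -mcpu=v8 -Wa,-xarch=v8plusa" \
CXX=gcc CXXFLAGS="-O3 -mcpu=v8 -Wa,-xarch=v8plusa -felide-constructors \
-fno-exceptions -fno-rtti" \
./configure --prefix=/usr/local/mysql --with-low-memory --enable-assembler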
If you have Sun's Forte 5.0 (or newer) compiler, you can
run configure like this:
CC=cc CFLAGS="-Xa -fast -native -xstrconst -mt" \ CXX=CC CXXFLAGS="-noex -mt" \ ./configure --prefix=/usr/local/mysql --enable-assembler
You can create a 64 bit binary using Sun's Forte compiler with the following compile flags:
CC=cc CFLAGS="-Xa -fast -native -xstrconst -mt -xarch=v9" \ CXX=CC CXXFLAGS="-noex -mt -xarch=v9" ASFLAGS="-xarch=v9" \ ./configure --prefix=/usr/local/mysql --enable-assembler
To create a 64-bit Solaris binary using gcc, add -m64 to
CFLAGS and CXXFLAGS. Note that this only works with MySQL
4.0 and up - MySQL 3.23 does not include the required modifications to
support this.
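A minimal sketch of such a 64-bit build (MySQL 4.0 or later only; combine with whatever other options you normally use):
CC=gcc CFLAGS="-O3 -m64" CXX=gcc CXXFLAGS="-O3 -m64" \
./configure --prefix=/usr/local/mysql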
In the MySQL benchmarks, we got a 4% speedup on an UltraSPARC when using Forte 5.0 in 32 bit mode compared to using gcc 3.2 with -mcpu flags.
If you create a 64-bit binary, it's 4% slower than the 32-bit binary, but
mysqld can instead handle more threads and memory.
If you get a problem with fdatasync or sched_yield,
you can fix this by adding LIBS=-lrt to the configure line.
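For example (a minimal sketch showing only where the variable goes; combine it with the configure options you would otherwise use):
LIBS=-lrt ./configure --prefix=/usr/local/mysql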
The following paragraph is only relevant for compilers older than WorkShop 5.3:
You may also have to edit the configure script to change this line:
#if !defined(__STDC__) || __STDC__ != 1
to this:
#if !defined(__STDC__)
If you turn on __STDC__ with the -Xc option, the Sun compiler
can't compile with the Solaris `pthread.h' header file. This is a Sun
bug (broken compiler or broken include file).
If mysqld issues the error message shown here when you run it, you have
tried to compile MySQL with the Sun compiler without enabling the
multi-thread option (-mt):
libc internal error: _rmutex_unlock: rmutex not held
Add -mt to CFLAGS and CXXFLAGS and try again.
If you are using the SFW version of gcc (which comes with Solaris 8),
you must add `/opt/sfw/lib' to the environment variable
LD_LIBRARY_PATH before running configure.
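For example, in a Bourne-compatible shell:
shell> LD_LIBRARY_PATH=/opt/sfw/lib:$LD_LIBRARY_PATH
shell> export LD_LIBRARY_PATH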
If you are using the gcc available from sunfreeware.com, you may
have many problems. You should recompile gcc and GNU binutils on the
machine you will be running them from to avoid any problems.
If you get the following error when compiling MySQL with gcc,
it means that your gcc is not configured for your version of Solaris:
shell> gcc -O3 -g -O2 -DDBUG_OFF -o thr_alarm ...
./thr_alarm.c: In function `signal_hand':
./thr_alarm.c:556: too many arguments to function `sigwait'
The proper thing to do in this case is to get the newest version of
gcc and compile it with your current gcc compiler! At
least for Solaris 2.5, almost all binary versions of gcc have
old, unusable include files that will break all programs that use
threads (and possibly other programs)!
Solaris doesn't provide static versions of all system libraries
(libpthreads and libdl), so you can't compile MySQL
with --static. If you try to do so, you will get the error:
ld: fatal: library -ldl: not found
or
undefined reference to `dlopen'
or
cannot find -lrt
If too many processes try to connect very rapidly to mysqld, you will
see this error in the MySQL log:
Error in accept: Protocol error
You might try starting the server with the --set-variable back_log=50
option as a workaround for this. Please note that --set-variable is
deprecated since MySQL 4.0; just use --back_log=50 on its own.
See section 4.1.1 mysqld Command-line Options.
If you are linking your own MySQL client, you might get the following error when you try to execute it:
ld.so.1: ./my: fatal: libmysqlclient.so.#: open failed: No such file or directory
The problem can be avoided by one of the following methods:
-Lpath):
-Wl,r/full-path-to-libmysqlclient.so.
LD_RUN_PATH environment variable before running your client, as shown below.
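As an illustration of the last method, in a Bourne-compatible shell (the directory shown is an assumed installation path):
shell> LD_RUN_PATH=/usr/local/mysql/lib
shell> export LD_RUN_PATH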
If you have problems with configure trying to link with -lz and
you don't have zlib installed, you have two options:
--with-named-z-libs=no.
If you are using gcc and have problems with loading user defined functions
(UDFs) into MySQL, try adding -lgcc to the link line for the
UDF.
If you would like MySQL to start automatically, you can copy `support-files/mysql.server' to `/etc/init.d' and create a symbolic link to it named `/etc/rc3.d/S99mysql.server'.
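For example (run as root; the paths are exactly those mentioned above):
shell> cp support-files/mysql.server /etc/init.d/mysql.server
shell> ln -s /etc/init.d/mysql.server /etc/rc3.d/S99mysql.server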
As Solaris doesn't support core files for setuid() applications,
you can't get a core file from mysqld if you are using the
--user option.
You can normally use a Solaris 2.6 binary on Solaris 2.7 and 2.8. Most of the Solaris 2.6 issues also apply for Solaris 2.7 and 2.8.
Note that MySQL Version 3.23.4 and above should be able to autodetect new versions of Solaris and enable workarounds for the following problems!
Solaris 2.7 / 2.8 has some bugs in the include files. You may see the
following error when you use gcc:
/usr/include/widec.h:42: warning: `getwc' redefined
/usr/include/wchar.h:326: warning: this is the location of the previous definition
If this occurs, you can do the following to fix the problem:
Copy /usr/include/widec.h to
.../lib/gcc-lib/os/gcc-version/include and change line 41 from:
#if !defined(lint) && !defined(__lint)
to
#if !defined(lint) && !defined(__lint) && !defined(getwc)
Alternatively, you can edit `/usr/include/widec.h' directly. Either
way, after you make the fix, you should remove `config.cache' and run
configure again!
If you get errors like this when you run make, it's because
configure didn't detect the `curses.h' file (probably
because of the error in `/usr/include/widec.h'):
In file included from mysql.cc:50:
/usr/include/term.h:1060: syntax error before `,'
/usr/include/term.h:1081: syntax error before `;'
The solution to this is to do one of the following:
CFLAGS=-DHAVE_CURSES_H CXXFLAGS=-DHAVE_CURSES_H ./configure.
#define HAVE_TERM line from `config.h' file and
run make again.
If you get a problem that your linker can't find -lz when linking
your client program, the problem is probably that your `libz.so' file is
installed in `/usr/local/lib'. You can fix this by one of the
following methods:
LD_LIBRARY_PATH.
--with-named-z-libs=no option.
On Solaris 2.8 on x86, mysqld will dump core if you remove the
debug symbols using strip.
If you are using gcc or egcs on Solaris x86 and you
experience problems with core dumps under load, you should use the
following configure command:
CC=gcc CFLAGS="-O3 -fomit-frame-pointer -DHAVE_CURSES_H" \ CXX=gcc \ CXXFLAGS="-O3 -fomit-frame-pointer -felide-constructors -fno-exceptions \ -fno-rtti -DHAVE_CURSES_H" \ ./configure --prefix=/usr/local/mysql
This will avoid problems with the libstdc++ library and with C++
exceptions.
If this doesn't help, you should compile a debug version and run
it with a trace file or under gdb. See section E.1.3 Debugging mysqld under gdb.
This section provides information for the various BSD flavours, as well as specific versions within those.
FreeBSD 4.x is recommended for running MySQL since the thread package is much more integrated.
The easiest and therefore the preferred way to install is to use the mysql-server and mysql-client ports available on http://www.freebsd.org/.
Using these gives you:
It is recommended you use MIT-pthreads on FreeBSD 2.x and native threads on
Versions 3 and up. It is possible to run with native threads on some late
2.2.x versions but you may encounter problems shutting down mysqld.
Unfortunately, certain function calls on FreeBSD are not yet fully
thread-safe, most notably the gethostbyname() function, which is
used by MySQL to convert host names into IP addresses. Under certain
circumstances, the mysqld process will suddenly cause 100%
CPU load and will be unresponsive. If you encounter this, try to start
up MySQL using the --skip-name-resolve option.
Alternatively, you can link MySQL on FreeBSD 4.x against the LinuxThreads library, which avoids a few of the problems that the native FreeBSD thread implementation has. For a very good comparison of LinuxThreads vs. native threads have a look at Jeremy Zawodny's article "FreeBSD or Linux for your MySQL Server?" at http://jeremy.zawodny.com/blog/archives/000203.html.
The MySQL `Makefile's require GNU make (gmake) to work. If
you want to compile MySQL you need to install GNU make first.
Be sure to have your name resolver set up correctly. Otherwise, you may
experience resolver delays or failures when connecting to mysqld.
Make sure that the localhost entry in the `/etc/hosts' file is
correct (otherwise, you will have problems connecting to the database). The
`/etc/hosts' file should start with a line:
127.0.0.1 localhost localhost.your.domain
The recommended way to compile and install MySQL on FreeBSD with gcc (2.95.2 and up) is:
CC=gcc CFLAGS="-O2 -fno-strength-reduce" \ CXX=gcc CXXFLAGS="-O2 -fno-rtti -fno-exceptions -felide-constructors \ -fno-strength-reduce" \ ./configure --prefix=/usr/local/mysql --enable-assembler gmake gmake install ./scripts/mysql_install_db cd /usr/local/mysql ./bin/mysqld_safe &
If you notice that configure will use MIT-pthreads, you should read
the MIT-pthreads notes. See section 2.3.6 MIT-pthreads Notes.
If you get an error from make install that it can't find
`/usr/include/pthreads', configure didn't detect that you need
MIT-pthreads. This is fixed by executing these commands:
shell> rm config.cache
shell> ./configure --with-mit-threads
FreeBSD is also known to have a very low default file handle limit.
See section A.2.16 File Not Found. Uncomment the ulimit -n section in
safe_mysqld or raise the limits for the mysqld user in /etc/login.conf
(and rebuild it with cap_mkdb /etc/login.conf). Also be sure you set the
appropriate class for this user in the password file if you are not
using the default (use: chpass mysqld-user-name). See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
If you have a lot of memory you should consider rebuilding
the kernel to allow MySQL to take more than 512M of RAM.
Take a look at option MAXDSIZ in the LINT config
file for more info.
If you get problems with the current date in MySQL, setting the
TZ variable will probably help. See section F Environment Variables.
To get a secure and stable system you should only use FreeBSD kernels
that are marked -RELEASE.
To compile on NetBSD you need GNU make. Otherwise, the compile will
crash when make tries to run lint on C++ files.
On OpenBSD Version 2.5, you can compile MySQL with native threads with the following options:
CFLAGS=-pthread CXXFLAGS=-pthread ./configure --with-mit-threads=no
Our users have reported that OpenBSD 2.8 has a threading bug which causes problems with MySQL. The OpenBSD Developers have fixed the problem, but as of January 25th, 2001, it's only available in the ``-current'' branch. The symptoms of this threading bug are: slow response, high load, high CPU usage, and crashes.
If you get an error like Error in accept:: Bad file descriptor or
error 9 when trying to open tables or directories, the problem is probably
that you haven't allocated enough file descriptors for MySQL.
In this case try starting safe_mysqld as root with the following
options:
--user=mysql --open-files-limit=2048
If you get the following error when compiling MySQL, your
ulimit value for virtual memory is too low:
item_func.h: In method `Item_func_ge::Item_func_ge(const Item_func_ge &)':
item_func.h:28: virtual memory exhausted
make[2]: *** [item_func.o] Error 1
Try using ulimit -v 80000 and run make again. If this
doesn't work and you are using bash, try switching to csh
or sh; some BSDI users have reported problems with bash
and ulimit.
If you are using gcc, you may also have to use the
--with-low-memory flag for configure to be able to compile
`sql_yacc.cc'.
If you get problems with the current date in MySQL, setting the
TZ variable will probably help. See section F Environment Variables.
Upgrade to BSD/OS Version 3.1. If that is not possible, install BSDI patch M300-038.
Use the following command when configuring MySQL:
shell> env CXX=shlicc++ CC=shlicc2 \
./configure \
--prefix=/usr/local/mysql \
--localstatedir=/var/mysql \
--without-perl \
--with-unix-socket-path=/var/mysql/mysql.sock
The following is also known to work:
shell> env CC=gcc CXX=gcc CXXFLAGS=-O3 \
./configure \
--prefix=/usr/local/mysql \
--with-unix-socket-path=/var/mysql/mysql.sock
You can change the directory locations if you wish, or just use the defaults by not specifying any locations.
If you have problems with performance under heavy load, try using the
--skip-thread-priority option to mysqld! This will run
all threads with the same priority; on BSDI Version 3.1, this gives better
performance (at least until BSDI fixes their thread scheduler).
If you get the error virtual memory exhausted while compiling,
you should try using ulimit -v 80000 and run make again.
If this doesn't work and you are using bash, try switching to
csh or sh; some BSDI users have reported problems with
bash and ulimit.
BSDI Version 4.x has some thread-related bugs. If you want to use MySQL on this, you should install all thread-related patches. At least M400-023 should be installed.
On some BSDI Version 4.x systems, you may get problems with shared libraries.
The symptom is that you can't execute any client programs, for example,
mysqladmin. In this case you need to reconfigure not to use
shared libraries with the --disable-shared option to configure.
Some customers have had problems on BSDI 4.0.1 where the mysqld
binary after a while can't open tables. This is because a
library/system-related bug causes mysqld to change its current
directory without having been asked to!
The fix is either to upgrade MySQL to 3.23.34 or later, or, after running
configure, to remove the line #define HAVE_REALPATH from config.h
before running make.
Note that the above means that you can't symbolically link a database directory to another database directory or symbolically link a table to another database on BSDI! (Making a symbolic link to another disk is okay.)
MySQL should work without any problems on Mac OS X 10.x (Darwin). You don't need the pthread patches for this OS!
This also applies to Mac OS X 10.x Server. Compiling for the Server platform is the same as for the client version of Mac OS X. However please note that MySQL comes preinstalled on the Server!
Our binary for Mac OS X is compiled on Darwin 6.3 with the following configure line:
CC=gcc CFLAGS="-O3 -fno-omit-frame-pointer" CXX=gcc \ CXXFLAGS="-O3 -fno-omit-frame-pointer -felide-constructors \ -fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \ --with-extra-charsets=complex --enable-thread-safe-client \ --enable-local-infile --disable-shared
See section 2.1.3 Installing MySQL on Mac OS X.
Before trying to configure MySQL on Mac OS X Server 1.2 (aka Rhapsody) you must first install the pthread package from http://www.prnet.de/RegEx/mysql.html.
See section 2.1.3 Installing MySQL on Mac OS X.
Some of the binary distributions of MySQL for HP-UX are distributed as an HP depot file and as a tar file. To use the depot file you must be running at least HP-UX 10.x to have access to HP's software depot tools.
The HP version of MySQL was compiled on an HP 9000/8xx server under HP-UX 10.20, and uses MIT-pthreads. It is known to work well under this configuration. MySQL Version 3.22.26 and newer can also be built with HP's native thread package.
Other configurations that may work:
The following configurations almost definitely won't work:
To install the distribution, use one of the commands here, where
/path/to/depot is the full pathname of the depot file:
shell> /usr/sbin/swinstall -s /path/to/depot mysql.full
shell> /usr/sbin/swinstall -s /path/to/depot mysql.server
shell> /usr/sbin/swinstall -s /path/to/depot mysql.client
shell> /usr/sbin/swinstall -s /path/to/depot mysql.developer
The depot places binaries and libraries in `/opt/mysql' and data in
`/var/opt/mysql'. The depot also creates the appropriate entries in
`/etc/init.d' and `/etc/rc2.d' to start the server automatically
at boot time. Obviously, this entails being root to install.
To install the HP-UX tar.gz distribution, you must have a copy of GNU
tar.
There are a couple of small problems when compiling MySQL on
HP-UX. We recommend that you use gcc instead of the HP-UX native
compiler, because gcc produces better code!
We recommend using gcc 2.95 on HP-UX. Don't use high optimisation flags (like -O6) as this may not be safe on HP-UX.
The following configure line should work with gcc 2.95:
CFLAGS="-I/opt/dce/include -fpic" \ CXXFLAGS="-I/opt/dce/include -felide-constructors -fno-exceptions \ -fno-rtti" CXX=gcc ./configure --with-pthread \ --with-named-thread-libs='-ldce' --prefix=/usr/local/mysql --disable-shared
The following configure line should work with gcc 3.1:
CFLAGS="-DHPUX -I/opt/dce/include -O3 -fPIC" CXX=gcc \ CXXFLAGS="-DHPUX -I/opt/dce/include -felide-constructors -fno-exceptions \ -fno-rtti -O3 -fPIC" ./configure --prefix=/usr/local/mysql \ --with-extra-charsets=complex --enable-thread-safe-client \ --enable-local-infile --with-pthread \ --with-named-thread-libs=-ldce --with-lib-ccflags=-fPIC --disable-shared
For HP-UX Version 11.x we recommend MySQL Version 3.23.15 or later.
Because of some critical bugs in the standard HP-UX libraries, you should install the following patches before trying to run MySQL on HP-UX 11.0:
PHKL_22840 Streams cumulative
PHNE_22397 ARPA cumulative
This will solve the problem of getting EWOULDBLOCK from recv()
and EBADF from accept() in threaded applications.
If you are using gcc 2.95.1 on an unpatched HP-UX 11.x system,
you will get the error:
In file included from /usr/include/unistd.h:11,
from ../include/global.h:125,
from mysql_priv.h:15,
from item.cc:19:
/usr/include/sys/unistd.h:184: declaration of C function ...
/usr/include/sys/pthread.h:440: previous declaration ...
In file included from item.h:306,
from mysql_priv.h:158,
from item.cc:19:
The problem is that HP-UX doesn't define pthread_atfork() consistently.
It has conflicting prototypes in
`/usr/include/sys/unistd.h':184 and
`/usr/include/sys/pthread.h':440 (details below).
One solution is to copy `/usr/include/sys/unistd.h' into `mysql/include' and edit `unistd.h' and change it to match the definition in `pthread.h'. Here's the diff:
183,184c183,184
< extern int pthread_atfork(void (*prepare)(), void (*parent)(),
<                           void (*child)());
---
> extern int pthread_atfork(void (*prepare)(void), void (*parent)(void),
>                           void (*child)(void));
After this, the following configure line should work:
CFLAGS="-fomit-frame-pointer -O3 -fpic" CXX=gcc \ CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti -O3" \ ./configure --prefix=/usr/local/mysql --disable-shared
If you are using MySQL 4.0.5 with the HP-UX compiler, you can use the following (tested with cc B.11.11.04):
CC=cc CXX=aCC CFLAGS=+DD64 CXXFLAGS=+DD64 ./configure --with-extra-character-set=complex
You can ignore any errors of the following type:
aCC: warning 901: unknown option: `-3': use +help for online documentation
If you get the following error from configure:
checking for cc option to accept ANSI C... no
configure: error: MySQL requires a ANSI C compiler (and a C++ compiler).
Try gcc. See the Installation chapter in the Reference Manual.
Check that you don't have the path to the K&R compiler before the path to the HP-UX C and C++ compiler.
Another reason for not being able to compile is that you didn't define
the +DD64 flags as above.
Automatic detection of xlC is missing from Autoconf, so a
configure command something like this is needed when compiling
MySQL (this example uses the IBM compiler):
export CC="xlc_r -ma -O3 -qstrict -qoptimize=3 -qmaxmem=8192 " export CXX="xlC_r -ma -O3 -qstrict -qoptimize=3 -qmaxmem=8192" export CFLAGS="-I /usr/local/include" export LDFLAGS="-L /usr/local/lib" export CPPFLAGS=$CFLAGS export CXXFLAGS=$CFLAGS ./configure --prefix=/usr/local \ --localstatedir=/var/mysql \ --sysconfdir=/etc/mysql \ --sbindir='/usr/local/bin' \ --libexecdir='/usr/local/bin' \ --enable-thread-safe-client \ --enable-large-files
Above are the options used to compile the MySQL distribution that can be found at http://www-frec.bull.com/.
If you change the -O3 to -O2 in the above configure line,
you must also remove the -qstrict option (this is a limitation in
the IBM C compiler).
If you are using gcc or egcs to compile MySQL, you
must use the -fno-exceptions flag, as the exception
handling in gcc/egcs is not thread-safe! (This is tested with
egcs 1.1.) There are also some known problems with IBM's assembler,
which may cause it to generate bad code when used with gcc.
We recommend the following configure line with egcs and
gcc 2.95 on AIX:
CC="gcc -pipe -mcpu=power -Wa,-many" \ CXX="gcc -pipe -mcpu=power -Wa,-many" \ CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti" \ ./configure --prefix=/usr/local/mysql --with-low-memory
The -Wa,-many is necessary for the compile to be successful. IBM is
aware of this problem but is in no hurry to fix it because a workaround
is available. We don't know if -fno-exceptions is required with
gcc 2.95, but as MySQL doesn't use exceptions and the option
generates faster code, we recommend that you always use this
option with egcs / gcc.
If you get a problem with assembler code, try changing the -mcpu=xxx option to match your CPU. Typically power2, power, or powerpc may need to be used; alternatively, you might need to use 604 or 604e. I'm not positive, but I would think using "power" would likely be safe most of the time, even on a power2 machine.
If you don't know what your CPU is, then do a "uname -m"; this will give you back a string that looks like "000514676700", with a format of xxyyyyyymmss where xx and ss are always 0's, yyyyyy is a unique system id and mm is the id of the CPU Planar. A chart of these values can be found at http://publib.boulder.ibm.com/doc_link/en_US/a_doc_lib/cmds/aixcmds5/uname.htm. This will give you a machine type and a machine model you can use to determine what type of CPU you have.
If you have problems with signals (MySQL dies unexpectedly under high load) you may have found an OS bug with threads and signals. In this case you can tell MySQL not to use signals by configuring with:
shell> CFLAGS=-DDONT_USE_THR_ALARM CXX=gcc \
CXXFLAGS="-felide-constructors -fno-exceptions -fno-rtti \
-DDONT_USE_THR_ALARM" \
./configure --prefix=/usr/local/mysql --with-debug --with-low-memory
This doesn't affect the performance of MySQL, but has the side
effect that you can't kill clients that are ``sleeping'' on a connection with
mysqladmin kill or mysqladmin shutdown. Instead, the client
will die when it issues its next command.
On some versions of AIX, linking with libbind.a makes
getservbyname core dump. This is an AIX bug and should be reported
to IBM.
For AIX 4.2.1 and gcc you have to do the following changes.
After configuring, edit `config.h' and `include/my_config.h' and change the line that says
#define HAVE_SNPRINTF 1
to
#undef HAVE_SNPRINTF
And finally, in `mysqld.cc' you need to add a prototype for initgroups:
#ifdef _AIX41
extern "C" int initgroups(const char *,int);
#endif
If you need to allocate a lot of memory to the mysqld process, it's not
enough to just set 'ulimit -d unlimited'. You may also have to add
something like the following in mysqld_safe:
export LDR_CNTRL='MAXDATA=0x80000000'
You can find more about using a lot of memory at: http://publib16.boulder.ibm.com/pseries/en_US/aixprggd/genprogc/lrg_prg_support.htm.
On SunOS 4, MIT-pthreads is needed to compile MySQL, which in turn
means you will need GNU make.
Some SunOS 4 systems have problems with dynamic libraries and libtool.
You can use the following configure line to avoid this problem:
shell> ./configure --disable-shared --with-mysqld-ldflags=-all-static
When compiling readline, you may get warnings about duplicate defines.
These may be ignored.
When compiling mysqld, there will be some implicit declaration
of function warnings. These may be ignored.
If you are using egcs 1.1.2 on Digital Unix, you should upgrade to gcc 2.95.2, as egcs on DEC has some serious bugs!
When compiling threaded programs under Digital Unix, the documentation
recommends using the -pthread option for cc and cxx and
the libraries -lmach -lexc (in addition to -lpthread). You
should run configure something like this:
CC="cc -pthread" CXX="cxx -pthread -O" \ ./configure --with-named-thread-libs="-lpthread -lmach -lexc -lc"
When compiling mysqld, you may see a couple of warnings like this:
mysqld.cc: In function `void handle_connections()':
mysqld.cc:626: passing `long unsigned int *' as argument 3 of
`accept(int,sockadddr *, int *)'
You can safely ignore these warnings. They occur because configure
can detect only errors, not warnings.
If you start the server directly from the command-line, you may have problems
with it dying when you log out. (When you log out, your outstanding processes
receive a SIGHUP signal.) If so, try starting the server like this:
shell> nohup mysqld [options] &
nohup causes the command following it to ignore any SIGHUP
signal sent from the terminal. Alternatively, start the server by running
safe_mysqld, which invokes mysqld using nohup for you.
See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
If you get a problem when compiling mysys/get_opt.c, just remove the line #define _NO_PROTO from the start of that file!
If you are using Compaq's CC compiler, the following configure line should work:
CC="cc -pthread" CFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed all -arch host" CXX="cxx -pthread" CXXFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed all -arch host \ -noexceptions -nortti" export CC CFLAGS CXX CXXFLAGS ./configure \ --prefix=/usr/local/mysql \ --with-low-memory \ --enable-large-files \ --enable-shared=yes \ --with-named-thread-libs="-lpthread -lmach -lexc -lc" gnumake
If you get a problem with libtool when compiling with shared libraries
as above, while linking mysql, you should be able to get around
this by issuing:
cd mysql
/bin/sh ../libtool --mode=link cxx -pthread -O3 -DDBUG_OFF \
-O4 -ansi_alias -ansi_args -fast -inline speed \
-speculate all \
-arch host -DUNDEF_HAVE_GETHOSTBYNAME_R \
-o mysql mysql.o readline.o sql_string.o completion_hash.o \
../readline/libreadline.a -lcurses \
../libmysql/.libs/libmysqlclient.so -lm
cd ..
gnumake
gnumake install
scripts/mysql_install_db
If you have problems compiling and have DEC CC and gcc
installed, try running configure like this:
CC=cc CFLAGS=-O CXX=gcc CXXFLAGS=-O3 \
./configure --prefix=/usr/local/mysql
If you get problems with the `c_asm.h' file, you can create and use a 'dummy' `c_asm.h' file with:
touch include/c_asm.h
CC=gcc CFLAGS=-I./include \
CXX=gcc CXXFLAGS=-O3 \
./configure --prefix=/usr/local/mysql
Note that the following problems with the ld program can be fixed
by downloading the latest DEC (Compaq) patch kit from:
http://ftp.support.compaq.com/public/unix/.
On OSF/1 V4.0D and compiler "DEC C V5.6-071 on Digital Unix V4.0 (Rev. 878)"
the compiler had some strange behaviour (undefined asm symbols).
/bin/ld also appears to be broken (problems with _exit
undefined errors occurring while linking mysqld). On this system, we
have managed to compile MySQL with the following configure
line, after replacing /bin/ld with the version from OSF 4.0C:
CC=gcc CXX=gcc CXXFLAGS=-O3 ./configure --prefix=/usr/local/mysql
With the Digital compiler "C++ V6.1-029", the following should work:
CC="cc -pthread"
CFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all \
-arch host"
CXX="cxx -pthread"
CXXFLAGS="-O4 -ansi_alias -ansi_args -fast -inline speed -speculate all \
-arch host -noexceptions -nortti"
export CC CFLAGS CXX CXXFLAGS
./configure --prefix=/usr/mysql/mysql --with-mysqld-ldflags=-all-static \
--disable-shared --with-named-thread-libs="-lmach -lexc -lc"
In some versions of OSF/1, the alloca() function is broken. Fix
this by removing the line in `config.h' that defines 'HAVE_ALLOCA'.
The alloca() function also may have an incorrect prototype in
`/usr/include/alloca.h'. The warning resulting from this can be ignored.
configure will use the following thread libraries automatically:
--with-named-thread-libs="-lpthread -lmach -lexc -lc".
When using gcc, you can also try running configure like this:
shell> CFLAGS=-D_PTHREAD_USE_D4 CXX=gcc CXXFLAGS=-O3 ./configure ...
If you have problems with signals (MySQL dies unexpectedly under high load), you may have found an OS bug with threads and signals. In this case you can tell MySQL not to use signals by configuring with:
shell> CFLAGS=-DDONT_USE_THR_ALARM \
CXXFLAGS=-DDONT_USE_THR_ALARM \
./configure ...
This doesn't affect the performance of MySQL, but has the side
effect that you can't kill clients that are ``sleeping'' on a connection with
mysqladmin kill or mysqladmin shutdown. Instead, the client
will die when it issues its next command.
With gcc 2.95.2, you will probably run into the following compile error:
sql_acl.cc:1456: Internal compiler error in `scan_region', at except.c:2566
Please submit a full bug report.
To fix this you should change to the sql directory and do a ``cut
and paste'' of the last gcc line, but change -O3 to
-O0 (or add -O0 immediately after gcc if you don't
have any -O option on your compile line). After this is done you
can just change back to the top-level directory and run make
again.
If you are using Irix Version 6.5.3 or newer, mysqld will only be able to
create threads if you run it as a user with CAP_SCHED_MGT
privileges (like root) or give the mysqld server this privilege
with the following shell command:
shell> chcap "CAP_SCHED_MGT+epi" /opt/mysql/libexec/mysqld
You may have to undefine some things in `config.h' after running
configure and before compiling.
In some Irix implementations, the alloca() function is broken. If the
mysqld server dies on some SELECT statements, remove the lines
from `config.h' that define HAVE_ALLOCA and HAVE_ALLOCA_H.
If mysqladmin create doesn't work, remove the line from `config.h'
that defines HAVE_READDIR_R. You may have to remove the
HAVE_TERM_H line as well.
SGI recommends that you install all of the patches on this page as a set: http://support.sgi.com/surfzone/patches/patchset/6.2_indigo.rps.html
At the very minimum, you should install the latest kernel rollup, the
latest rld rollup, and the latest libc rollup.
You definitely need all the POSIX patches on this page, for pthreads support:
http://support.sgi.com/surfzone/patches/patchset/6.2_posix.rps.html
If you get something like the following error when compiling `mysql.cc':
"/usr/include/curses.h", line 82: error(1084): invalid combination of type
Type the following in the top-level directory of your MySQL source tree:
shell> extra/replace bool curses_bool < /usr/include/curses.h \
> include/curses.h
shell> make
There have also been reports of scheduling problems. If only one thread is running, things go slow. Avoid this by starting another client. This may lead to a 2-to-10-fold increase in execution speed thereafter for the other thread. This is a poorly understood problem with Irix threads; you may have to improvise to find solutions until this can be fixed.
If you are compiling with gcc, you can use the following
configure command:
CC=gcc CXX=gcc CXXFLAGS=-O3 \
./configure --prefix=/usr/local/mysql --enable-thread-safe-client \
--with-named-thread-libs=-lpthread
On Irix 6.5.11 with native Irix C and C++ compilers ver. 7.3.1.2, the following is reported to work:
CC=cc CXX=CC CFLAGS='-O3 -n32 -TARG:platform=IP22 -I/usr/local/include \
-L/usr/local/lib' CXXFLAGS='-O3 -n32 -TARG:platform=IP22 \
-I/usr/local/include -L/usr/local/lib' ./configure \
--prefix=/usr/local/mysql --with-innodb --with-berkeley-db \
--with-libwrap=/usr/local \
--with-named-curses-libs=/usr/local/lib/libncurses.a
The current port is tested only on ``sco3.2v5.0.5'', ``sco3.2v5.0.6'' and ``sco3.2v5.0.7'' systems. There has also been a lot of progress on a port to ``sco 3.2v4.2''.
For the moment the recommended compiler on OpenServer is gcc 2.95.2. With this you should be able to compile MySQL with just:
CC=gcc CXX=gcc ./configure ... (options)
./configure in the `threads/src' directory and select
the SCO OpenServer option. This command copies `Makefile.SCO5' to
`Makefile'.
make.
cd to the `thread/src' directory, and run make
install.
make when making MySQL.
safe_mysqld as root, you probably will get only the
default 110 open files per process. mysqld will write a note about this
in the log file.
configure command should work:
shell> ./configure --prefix=/usr/local/mysql --disable-shared
configure command should work:
shell> CFLAGS="-D_XOPEN_XPG4" CXX=gcc CXXFLAGS="-D_XOPEN_XPG4" \
./configure \
--prefix=/usr/local/mysql \
--with-named-thread-libs="-lgthreads -lsocket -lgen -lgthreads" \
--with-named-curses-libs="-lcurses"
You may get some problems with some include files. In this case, you can
find new SCO-specific include files at
http://www.mysql.com/Downloads/SCO/SCO-3.2v4.2-includes.tar.gz.
You should unpack this file in the `include' directory of your
MySQL source tree.
SCO development notes:
mysqld
with -lgthreads -lsocket -lgthreads.
malloc. If you encounter problems with memory usage,
make sure that `gmalloc.o' is included in `libgthreads.a' and
`libgthreads.so'.
read(),
write(), getmsg(), connect(), accept(),
select(), and wait().
If you want to install DBI on SCO, you have to edit the `Makefile' in DBI-xxx and each subdirectory.
Note that the following assumes gcc 2.95.2 or newer:
OLD:                                  NEW:
CC = cc                               CC = gcc
CCCDLFLAGS = -KPIC -W1,-Bexport       CCCDLFLAGS = -fpic
CCDLFLAGS = -wl,-Bexport              CCDLFLAGS =
LD = ld                               LD = gcc -G -fpic
LDDLFLAGS = -G -L/usr/local/lib       LDDLFLAGS = -L/usr/local/lib
LDFLAGS = -belf -L/usr/local/lib      LDFLAGS = -L/usr/local/lib
LD = ld                               LD = gcc -G -fpic
OPTIMISE = -Od                        OPTIMISE = -O1

OLD:
CCCFLAGS = -belf -dy -w0 -U M_XENIX -DPERL_SCO5 -I/usr/local/include

NEW:
CCFLAGS = -U M_XENIX -DPERL_SCO5 -I/usr/local/include
This is because the Perl dynaloader will not load the DBI modules
if they were compiled with icc or cc.
Perl works best when compiled with cc.
You must use MySQL Version 3.22.13 or later and UnixWare 7.1.0 or later, because these versions fix some portability and OS problems under UnixWare.
We have been able to compile MySQL with the following configure
command on UnixWare Version 7.1.x:
CC=cc CXX=CC ./configure --prefix=/usr/local/mysql
If you want to use gcc, you must use gcc 2.95.2 or newer.
CC=gcc CXX=g++ ./configure --prefix=/usr/local/mysql
MySQL uses quite a few open files. Because of this, you should add something like the following to your `CONFIG.SYS' file:
SET EMXOPT=-c -n -h1024
If you don't do this, you will probably run into the following error:
File 'xxxx' not found (Errcode: 24)
When using MySQL with OS/2 Warp 3, FixPack 29 or above is required. With OS/2 Warp 4, FixPack 4 or above is required. This is a requirement of the Pthreads library. MySQL must be installed in a partition that supports long filenames such as HPFS, FAT32, etc.
The `INSTALL.CMD' script must be run from OS/2's own `CMD.EXE' and may not work with replacement shells such as `4OS2.EXE'.
The `scripts/mysql-install-db' script has been renamed. It is now called `install.cmd' and is a REXX script, which will set up the default MySQL security settings and create the WorkPlace Shell icons for MySQL.
Dynamic module support is compiled in but not fully tested. Dynamic modules should be compiled using the Pthreads run-time library.
gcc -Zdll -Zmt -Zcrtdll=pthrdrtl -I../include -I../regex -I.. \
-o example udf_example.cc -L../lib -lmysqlclient udf_example.def
mv example.dll example.udf
Note: Due to limitations in OS/2, UDF module name stems must not
exceed 8 characters. Modules are stored in the `/mysql2/udf'
directory; the safe-mysqld.cmd script will put this directory in
the BEGINLIBPATH environment variable. When using UDF modules,
specified extensions are ignored; the extension is assumed to be `.udf'.
For example, in Unix, the shared module might be named `example.so'
and you would load a function from it like this:
mysql> CREATE FUNCTION metaphon RETURNS STRING SONAME "example.so";
In OS/2, the module would be named `example.udf', but you would not specify the module extension:
mysql> CREATE FUNCTION metaphon RETURNS STRING SONAME "example";
Porting MySQL to NetWare was an effort spearheaded by
Novell. Novell customers will be pleased to note that NetWare 6.5
will ship with bundled MySQL binaries, complete with an automatic
commercial use license for all servers running that version of NetWare.
See section 2.1.4 Installing MySQL on NetWare.
MySQL for NetWare is compiled using a combination of
Metrowerks Codewarrior for NetWare and special cross-compilation
versions of the GNU autotools. Check back here in the future for more
information on building and optimising MySQL for NetWare.
We are really interested in getting MySQL to work on BeOS, but unfortunately we don't have any person who knows BeOS or has time to do a port.
We are interested in finding someone to do a port, and we will help them with any technical questions they may have while doing the port.
We have previously talked with some BeOS developers that have said that MySQL is 80% ported to BeOS, but we haven't heard from them in a while.
Perl support for MySQL is provided by means of the
DBI/DBD client interface. See section 8.5 MySQL Perl API. The Perl
DBD/DBI client code requires Perl Version 5.004 or later. The
interface will not work if you have an older version of Perl.
MySQL Perl support also requires that you've installed MySQL client programming support. If you installed MySQL from RPM files, client programs are in the client RPM, but client programming support is in the developer RPM. Make sure you've installed the latter RPM.
As of Version 3.22.8, Perl support is distributed separately from the main MySQL distribution. If you want to install Perl support, the files you will need can be obtained from http://www.mysql.com/downloads/api-dbi.html.
The Perl distributions are provided as compressed tar archives and
have names like `MODULE-VERSION.tar.gz', where MODULE is the
module name and VERSION is the version number. You should get the
Data-Dumper, DBI, and DBD-mysql distributions
and install them in that order. The installation procedure is shown here.
The example shown is for the Data-Dumper module, but the procedure is
the same for all three distributions:
shell> gunzip < Data-Dumper-VERSION.tar.gz | tar xvf -
This command creates a directory named `Data-Dumper-VERSION'.
shell> cd Data-Dumper-VERSION
shell> perl Makefile.PL
shell> make
shell> make test
shell> make install
The make test command is important because it verifies that the
module is working. Note that when you run that command during the
DBD-mysql installation to exercise the interface code, the
MySQL server must be running or the test will fail.
It is a good idea to rebuild and reinstall the DBD-mysql
distribution whenever you install a new release of MySQL,
particularly if you notice symptoms such as all your DBI scripts
dumping core after you upgrade MySQL.
If you don't have the right to install Perl modules in the system directory or if you want to install local Perl modules, the following reference may help you:
http://www.iserver.com/support/contrib/perl5/modules.html
Look under the heading
Installing New Modules that Require Locally Installed Modules.
To install the MySQL DBD module with ActiveState Perl on
Windows, you should do the following:
set HTTP_proxy=my.proxy.com:3128
C:\> c:\perl\bin\ppm.pl
DBI:
ppm> install DBI
install \
ftp://ftp.de.uu.net/pub/CPAN/authors/id/JWIED/DBD-mysql-1.2212.x86.ppd
The above should work at least with ActiveState Perl Version 5.6.
If you can't get the above to work, you should instead install the
MyODBC driver and connect to MySQL server through
ODBC:
use DBI;
$dbh= DBI->connect("DBI:ODBC:$dsn","$user","$password") ||
die "Got error $DBI::errstr when connecting to $dsn\n";
The MySQL Perl distribution contains DBI,
DBD::mysql and DBD::ODBC.
C: so that you get a `C:\PERL' directory.
perl works by executing perl -v in a DOS shell.
DBI/DBD Interface
If Perl reports that it can't find the `../mysql/mysql.so' module, then the problem is probably that Perl can't locate the shared library `libmysqlclient.so'.
You can fix this by any of the following methods:
DBD-mysql distribution with perl
Makefile.PL -static -config rather than perl Makefile.PL.
LD_RUN_PATH environment variable.
If you get the following errors from DBD-mysql,
you are probably using gcc (or using an old binary compiled with
gcc):
/usr/bin/perl: can't resolve symbol '__moddi3'
/usr/bin/perl: can't resolve symbol '__divdi3'
Add -L/usr/lib/gcc-lib/... -lgcc to the link command when the
`mysql.so' library gets built (check the output from make for
`mysql.so' when you compile the Perl client). The -L option
should specify the pathname of the directory where `libgcc.a' is located
on your system.
Another cause of this problem may be that Perl and MySQL aren't both
compiled with gcc. In this case, you can solve the mismatch by
compiling both with gcc.
If you get the following error from DBD-mysql
when you run the tests:
t/00base............install_driver(mysql) failed:
Can't load '../blib/arch/auto/DBD/mysql/mysql.so' for module DBD::mysql:
../blib/arch/auto/DBD/mysql/mysql.so: undefined symbol: uncompress
at /usr/lib/perl5/5.00503/i586-linux/DynaLoader.pm line 169.
it means that you need to add the compression library, -lz, to the link line. This can be done by making the following change in the file `lib/DBD/mysql/Install.pm':
$sysliblist .= " -lm";
to
$sysliblist .= " -lm -lz";
After this, you must run 'make realclean' and then proceed with the installation from the beginning.
If you want to use the Perl module on a system that doesn't support
dynamic linking (like SCO) you can generate a static version of
Perl that includes DBI and DBD-mysql. The way this works
is that you generate a version of Perl with the DBI code linked
in and install it on top of your current Perl. Then you use that to
build a version of Perl that additionally has the DBD code linked
in, and install that.
On SCO, you must have the following environment variables set:
shell> LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib:/usr/progressive/lib
or
shell> LD_LIBRARY_PATH=/usr/lib:/lib:/usr/local/lib:/usr/ccs/lib:\
/usr/progressive/lib:/usr/skunk/lib
shell> LIBPATH=/usr/lib:/lib:/usr/local/lib:/usr/ccs/lib:\
/usr/progressive/lib:/usr/skunk/lib
shell> MANPATH=scohelp:/usr/man:/usr/local1/man:/usr/local/man:\
/usr/skunk/man:
First, create a Perl that includes a statically linked DBI by running
these commands in the directory where your DBI distribution is
located:
shell> perl Makefile.PL -static -config
shell> make
shell> make install
shell> make perl
Then you must install the new Perl. The output of make perl will
indicate the exact make command you will need to execute to perform
the installation. On SCO, this is
make -f Makefile.aperl inst_perl MAP_TARGET=perl.
Next, use the just-created Perl to create another Perl that also includes a
statically-linked DBD::mysql by running these commands in the
directory where your DBD-mysql distribution is located:
shell> perl Makefile.PL -static -config
shell> make
shell> make install
shell> make perl
Finally, you should install this new Perl. Again, the output of make
perl indicates the command to use.
This chapter provides a tutorial introduction to MySQL by showing
how to use the mysql client program to create and use a simple
database. mysql (sometimes referred to as the ``terminal monitor'' or
just ``monitor'') is an interactive program that allows you to connect to a
MySQL server, run queries, and view the results. mysql may
also be used in batch mode: you place your queries in a file beforehand, then
tell mysql to execute the contents of the file. Both ways of using
mysql are covered here.
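For example, batch mode boils down to redirecting a file of statements into mysql (the file name here is only an illustration; batch mode is covered in more detail later in this chapter):
shell> mysql < batch-file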
To see a list of options provided by mysql, invoke it with
the --help option:
shell> mysql --help
This chapter assumes that mysql is installed on your machine and that
a MySQL server is available to which you can connect. If this is
not true, contact your MySQL administrator. (If you are the
administrator, you will need to consult other sections of this manual.)
This chapter describes the entire process of setting up and using a database. If you are interested only in accessing an already-existing database, you may want to skip over the sections that describe how to create the database and the tables it contains.
Because this chapter is tutorial in nature, many details are necessarily left out. Consult the relevant sections of the manual for more information on the topics covered here.
To connect to the server, you'll usually need to provide a MySQL
user name when you invoke mysql and, most likely, a password. If the
server runs on a machine other than the one where you log in, you'll also
need to specify a hostname. Contact your administrator to find out what
connection parameters you should use to connect (that is, what host, user name,
and password to use). Once you know the proper parameters, you should be
able to connect like this:
shell> mysql -h host -u user -p
Enter password: ********
The ******** represents your password; enter it when mysql
displays the Enter password: prompt.
If that works, you should see some introductory information followed by a
mysql> prompt:
shell> mysql -h host -u user -p
Enter password: ********
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 459 to server version: 3.22.20a-log

Type 'help' for help.

mysql>
The prompt tells you that mysql is ready for you to enter commands.
Some MySQL installations allow users to connect as the anonymous
(unnamed) user to the server running on the local host. If this is the case
on your machine, you should be able to connect to that server by invoking
mysql without any options:
shell> mysql
After you have connected successfully, you can disconnect any time by typing
QUIT at the mysql> prompt:
mysql> QUIT
Bye
You can also disconnect by pressing Control-D.
Most examples in the following sections assume you are connected to the
server. They indicate this by the mysql> prompt.
Make sure you are connected to the server, as discussed in the previous
section. Doing so will not in itself select any database to work with, but
that's okay. At this point, it's more important to find out a little about
how to issue queries than to jump right in creating tables, loading data
into them, and retrieving data from them. This section describes the basic
principles of entering commands, using several queries you can try out to
familiarise yourself with how mysql works.
Here's a simple command that asks the server to tell you its version number
and the current date. Type it in as shown here following the mysql>
prompt and press Enter:
mysql> SELECT VERSION(), CURRENT_DATE;
+--------------+--------------+
| VERSION()    | CURRENT_DATE |
+--------------+--------------+
| 3.22.20a-log | 1999-03-19   |
+--------------+--------------+
1 row in set (0.01 sec)
mysql>
This query illustrates several things about mysql:
QUIT,
mentioned earlier, is one of them. We'll get to others later.)
mysql sends it to the server for execution
and displays the results, then prints another mysql> to indicate
that it is ready for another command.
mysql displays query output as a table (rows and columns). The first
row contains labels for the columns. The rows following are the query
results. Normally, column labels are the names of the columns you fetch from
database tables. If you're retrieving the value of an expression rather than
a table column (as in the example just shown), mysql labels the column
using the expression itself.
mysql shows how many rows were returned and how long the query took
to execute, which gives you a rough idea of server performance. These values
are imprecise because they represent wall clock time (not CPU or machine
time), and because they are affected by factors such as server load and
network latency. (For brevity, the ``rows in set'' line is not shown in
the remaining examples in this chapter.)
Keywords may be entered in any lettercase. The following queries are equivalent:
mysql> SELECT VERSION(), CURRENT_DATE;
mysql> select version(), current_date;
mysql> SeLeCt vErSiOn(), current_DATE;
Here's another query. It demonstrates that you can use mysql as a
simple calculator:
mysql> SELECT SIN(PI()/4), (4+1)*5;
+-------------+---------+
| SIN(PI()/4) | (4+1)*5 |
+-------------+---------+
|    0.707107 |      25 |
+-------------+---------+
The commands shown thus far have been relatively short, single-line statements. You can even enter multiple statements on a single line. Just end each one with a semicolon:
mysql> SELECT VERSION(); SELECT NOW();
+--------------+
| VERSION()    |
+--------------+
| 3.22.20a-log |
+--------------+

+---------------------+
| NOW()               |
+---------------------+
| 1999-03-19 00:15:33 |
+---------------------+
A command need not be given all on a single line, so lengthy commands that
require several lines are not a problem. mysql determines where your
statement ends by looking for the terminating semicolon, not by looking for
the end of the input line. (In other words, mysql
accepts free-format input: it collects input lines but does not execute them
until it sees the semicolon.)
Here's a simple multiple-line statement:
mysql> SELECT
-> USER()
-> ,
-> CURRENT_DATE;
+--------------------+--------------+
| USER() | CURRENT_DATE |
+--------------------+--------------+
| joesmith@localhost | 1999-03-18 |
+--------------------+--------------+
In this example, notice how the prompt changes from mysql> to
-> after you enter the first line of a multiple-line query. This is
how mysql indicates that it hasn't seen a complete statement and is
waiting for the rest. The prompt is your friend, because it provides
valuable feedback. If you use that feedback, you will always be aware of
what mysql is waiting for.
If you decide you don't want to execute a command that you are in the
process of entering, cancel it by typing \c:
mysql> SELECT
-> USER()
-> \c
mysql>
Here, too, notice the prompt. It switches back to mysql> after you
type \c, providing feedback to indicate that mysql is ready
for a new command.
The following table shows each of the prompts you may see and summarises what
they mean about the state that mysql is in:
| Prompt | Meaning |
| mysql> | Ready for new command. |
| ->     | Waiting for next line of multiple-line command. |
| '>     | Waiting for next line, collecting a string that begins with a single quote (`''). |
| ">     | Waiting for next line, collecting a string that begins with a double quote (`"'). |
Multiple-line statements commonly occur by accident when you intend to
issue a command on a single line, but forget the terminating semicolon. In
this case, mysql waits for more input:
mysql> SELECT USER()
->
If this happens to you (you think you've entered a statement but the only
response is a -> prompt), most likely mysql is waiting for the
semicolon. If you don't notice what the prompt is telling you, you might sit
there for a while before realising what you need to do. Enter a semicolon to
complete the statement, and mysql will execute it:
mysql> SELECT USER()
-> ;
+--------------------+
| USER() |
+--------------------+
| joesmith@localhost |
+--------------------+
The '> and "> prompts occur during string collection.
In MySQL, you can write strings surrounded by either `''
or `"' characters (for example, 'hello' or "goodbye"),
and mysql lets you enter strings that span multiple lines. When you
see a '> or "> prompt, it means that you've entered a line
containing a string that begins with a `'' or `"' quote character,
but have not yet entered the matching quote that terminates the string.
That's fine if you really are entering a multiple-line string, but how likely
is that? Not very. More often, the '> and "> prompts indicate
that you've inadvertently left out a quote character. For example:
mysql> SELECT * FROM my_table WHERE name = "Smith AND age < 30;
">
If you enter this SELECT statement, then press Enter and wait for the
result, nothing will happen. Instead of wondering why this
query takes so long, notice the clue provided by the "> prompt. It
tells you that mysql expects to see the rest of an unterminated
string. (Do you see the error in the statement? The string "Smith is
missing the second quote.)
At this point, what do you do? The simplest thing is to cancel the command.
However, you cannot just type \c in this case, because mysql
interprets it as part of the string that it is collecting! Instead, enter
the closing quote character (so mysql knows you've finished the
string), then type \c:
mysql> SELECT * FROM my_table WHERE name = "Smith AND age < 30;
"> "\c
mysql>
The prompt changes back to mysql>, indicating that mysql
is ready for a new command.
It's important to know what the '> and "> prompts signify,
because if you mistakenly enter an unterminated string, any further lines you
type will appear to be ignored by mysql, including a line
containing QUIT! This can be quite confusing, especially if you
don't know that you need to supply the terminating quote before you can
cancel the current command.
Now that you know how to enter commands, it's time to access a database.
Suppose you have several pets in your home (your menagerie) and you'd like to keep track of various types of information about them. You can do so by creating tables to hold your data and loading them with the desired information. Then you can answer different sorts of questions about your animals by retrieving data from the tables. This section shows you how to:
The menagerie database will be simple (deliberately), but it is not difficult
to think of real-world situations in which a similar type of database might
be used. For example, a database like this could be used by a farmer to keep
track of livestock, or by a veterinarian to keep track of patient records.
A menagerie distribution containing some of the queries and sample data used
in the following sections can be obtained from the MySQL web site.
It's available in either compressed tar format
(http://www.mysql.com/Downloads/Contrib/Examples/menagerie.tar.gz)
or Zip format
(http://www.mysql.com/Downloads/Contrib/Examples/menagerie.zip).
Use the SHOW statement to find out what databases currently exist
on the server:
mysql> SHOW DATABASES;
+----------+
| Database |
+----------+
| mysql    |
| test     |
| tmp      |
+----------+
The list of databases is probably different on your machine, but the
mysql and test databases are likely to be among them. The
mysql database is required because it describes user access
privileges. The test database is often provided as a workspace for
users to try things out.
Note that you may not see all databases if you don't have the
SHOW DATABASES privilege. See section 4.3.1 GRANT and REVOKE Syntax.
If the test database exists, try to access it:
mysql> USE test
Database changed
Note that USE, like QUIT, does not require a semicolon. (You
can terminate such statements with a semicolon if you like; it does no harm.)
The USE statement is special in another way, too: it must be given on
a single line.
You can use the test database (if you have access to it) for the
examples that follow, but anything you create in that database can be
removed by anyone else with access to it. For this reason, you should
probably ask your MySQL administrator for permission to use a
database of your own. Suppose you want to call yours menagerie. The
administrator needs to execute a command like this:
mysql> GRANT ALL ON menagerie.* TO your_mysql_name;
where your_mysql_name is the MySQL user name assigned to
you.
If the administrator creates your database for you when setting up your permissions, you can begin using it. Otherwise, you need to create it yourself:
mysql> CREATE DATABASE menagerie;
Under Unix, database names are case-sensitive (unlike SQL keywords), so you
must always refer to your database as menagerie, not as
Menagerie, MENAGERIE, or some other variant. This is also true
for table names. (Under Windows, this restriction does not apply, although
you must refer to databases and tables using the same lettercase throughout a
given query.)
Creating a database does not select it for use; you must do that explicitly.
To make menagerie the current database, use this command:
mysql> USE menagerie
Database changed
Your database needs to be created only once, but you must select it for use
each time you begin a mysql session. You can do this by issuing a
USE statement as shown above. Alternatively, you can select the
database on the command-line when you invoke mysql. Just specify its
name after any connection parameters that you might need to provide. For
example:
shell> mysql -h host -u user -p menagerie
Enter password: ********
Note that menagerie is not your password on the command just shown.
If you want to supply your password on the command-line after the -p
option, you must do so with no intervening space (for example, as
-pmypassword, not as -p mypassword). However, putting your
password on the command-line is not recommended, because doing so exposes it
to snooping by other users logged in on your machine.
Creating the database is the easy part, but at this point it's empty, as
SHOW TABLES will tell you:
mysql> SHOW TABLES;
Empty set (0.00 sec)
The harder part is deciding what the structure of your database should be: what tables you will need and what columns will be in each of them.
You'll want a table that contains a record for each of your pets. This can
be called the pet table, and it should contain, as a bare minimum,
each animal's name. Because the name by itself is not very interesting, the
table should contain other information. For example, if more than one person
in your family keeps pets, you might want to list each animal's owner. You
might also want to record some basic descriptive information such as species
and sex.
How about age? That might be of interest, but it's not a good thing to store in a database. Age changes as time passes, which means you'd have to update your records often. Instead, it's better to store a fixed value such as date of birth. Then, whenever you need age, you can calculate it as the difference between the current date and the birth date. MySQL provides functions for doing date arithmetic, so this is not difficult. Storing birth date rather than age has other advantages, too:
You can probably think of other types of information that would be useful in
the pet table, but the ones identified so far are sufficient for now:
name, owner, species, sex, birth, and death.
Use a CREATE TABLE statement to specify the layout of your table:
mysql> CREATE TABLE pet (name VARCHAR(20), owner VARCHAR(20),
-> species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);
VARCHAR is a good choice for the name, owner, and
species columns because the column values will vary in length. The
lengths of those columns need not all be the same, and need not be
20. You can pick any length from 1 to 255, whatever
seems most reasonable to you. (If you make a poor choice and it turns
out later that you need a longer field, MySQL provides an
ALTER TABLE statement.)
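For instance (a hypothetical change, not part of the tutorial data), if the owner column later turns out to be too short, you could widen it with a statement along these lines:
mysql> ALTER TABLE pet MODIFY owner VARCHAR(40);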
Several types of values can be chosen to represent sex in animal records,
such as "m"
and "f", or perhaps "male" and "female". It's simplest
to use the single characters "m" and "f".
The use of the DATE data type for the birth and death
columns is a fairly obvious choice.
Now that you have created a table, SHOW TABLES should produce some
output:
mysql> SHOW TABLES;
+---------------------+
| Tables in menagerie |
+---------------------+
| pet                 |
+---------------------+
To verify that your table was created the way you expected, use
a DESCRIBE statement:
mysql> DESCRIBE pet;
+---------+-------------+------+-----+---------+-------+
| Field   | Type        | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| name    | varchar(20) | YES  |     | NULL    |       |
| owner   | varchar(20) | YES  |     | NULL    |       |
| species | varchar(20) | YES  |     | NULL    |       |
| sex     | char(1)     | YES  |     | NULL    |       |
| birth   | date        | YES  |     | NULL    |       |
| death   | date        | YES  |     | NULL    |       |
+---------+-------------+------+-----+---------+-------+
You can use DESCRIBE any time, for example, if you forget the names of
the columns in your table or what types they are.
After creating your table, you need to populate it. The LOAD DATA and
INSERT statements are useful for this.
Suppose your pet records can be described as shown here.
(Observe that MySQL expects dates in 'YYYY-MM-DD' format;
this may be different from what you are used to.)
| name     | owner  | species | sex | birth      | death      |
| Fluffy   | Harold | cat     | f   | 1993-02-04 |            |
| Claws    | Gwen   | cat     | m   | 1994-03-17 |            |
| Buffy    | Harold | dog     | f   | 1989-05-13 |            |
| Fang     | Benny  | dog     | m   | 1990-08-27 |            |
| Bowser   | Diane  | dog     | m   | 1998-08-31 | 1995-07-29 |
| Chirpy   | Gwen   | bird    | f   | 1998-09-11 |            |
| Whistler | Gwen   | bird    |     | 1997-12-09 |            |
| Slim     | Benny  | snake   | m   | 1996-04-29 |            |
Because you are beginning with an empty table, an easy way to populate it is to create a text file containing a row for each of your animals, then load the contents of the file into the table with a single statement.
You could create a text file `pet.txt' containing one record per line,
with values separated by tabs, and given in the order in which the columns
were listed in the CREATE TABLE statement. For missing values (such
as unknown sexes or death dates for animals that are still living), you can
use NULL values. To represent these in your text file, use
\N. For example, the record for Whistler the bird would look like
this (where the whitespace between values is a single tab character):
| name     | owner | species | sex | birth      | death |
| Whistler | Gwen  | bird    | \N  | 1997-12-09 | \N    |
To load the text file `pet.txt' into the pet table, use this
command:
mysql> LOAD DATA LOCAL INFILE "pet.txt" INTO TABLE pet;
You can specify the column value separator and end of line marker explicitly
in the LOAD DATA statement if you wish, but the defaults are tab and
linefeed. These are sufficient for the statement to read the file
`pet.txt' properly.
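As a sketch of what the explicit form might look like (assuming, purely for illustration, a version of the file saved with comma-separated values and carriage return/linefeed line endings, such as one exported from Windows), you could write:
mysql> LOAD DATA LOCAL INFILE "pet.txt" INTO TABLE pet
    ->     FIELDS TERMINATED BY ','
    ->     LINES TERMINATED BY '\r\n';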
When you want to add new records one at a time, the INSERT statement
is useful. In its simplest form, you supply values for each column, in the
order in which the columns were listed in the CREATE TABLE statement.
Suppose Diane gets a new hamster named Puffball. You could add a new record
using an INSERT statement like this:
mysql> INSERT INTO pet
-> VALUES ('Puffball','Diane','hamster','f','1999-03-30',NULL);
Note that string and date values are specified as quoted strings here. Also,
with INSERT, you can insert NULL directly to represent a
missing value. You do not use \N like you do with LOAD DATA.
From this example, you should be able to see that there would be a lot more
typing involved to load
your records initially using several INSERT statements rather
than a single LOAD DATA statement.
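A single INSERT can also add several rows at once by listing multiple value sets, which cuts down the typing somewhat. A small sketch using made-up animals (not part of the tutorial data):
mysql> INSERT INTO pet VALUES
    -> ('Scales','Benny','lizard','m','1998-05-22',NULL),
    -> ('Nibbler','Diane','hamster','f','1999-01-15',NULL);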
The SELECT statement is used to pull information from a table.
The general form of the statement is:
SELECT what_to_select FROM which_table WHERE conditions_to_satisfy
what_to_select indicates what you want to see. This can be a list of
columns, or * to indicate ``all columns.'' which_table
indicates the table from which you want to retrieve data. The WHERE
clause is optional. If it's present, conditions_to_satisfy specifies
conditions that rows must satisfy to qualify for retrieval.
The simplest form of SELECT retrieves everything from a table:
mysql> SELECT * FROM pet;
+----------+--------+---------+------+------------+------------+
| name     | owner  | species | sex  | birth      | death      |
+----------+--------+---------+------+------------+------------+
| Fluffy   | Harold | cat     | f    | 1993-02-04 | NULL       |
| Claws    | Gwen   | cat     | m    | 1994-03-17 | NULL       |
| Buffy    | Harold | dog     | f    | 1989-05-13 | NULL       |
| Fang     | Benny  | dog     | m    | 1990-08-27 | NULL       |
| Bowser   | Diane  | dog     | m    | 1998-08-31 | 1995-07-29 |
| Chirpy   | Gwen   | bird    | f    | 1998-09-11 | NULL       |
| Whistler | Gwen   | bird    | NULL | 1997-12-09 | NULL       |
| Slim     | Benny  | snake   | m    | 1996-04-29 | NULL       |
| Puffball | Diane  | hamster | f    | 1999-03-30 | NULL       |
+----------+--------+---------+------+------------+------------+
This form of SELECT is useful if you want to review your entire table,
for instance, after you've just loaded it with your initial dataset. As it
happens, the output just shown reveals an error in your datafile: Bowser
appears to have been born after he died! Consulting your original pedigree
papers, you find that the correct birth year is 1989, not 1998.
There are at least a couple of ways to fix this:
Edit the file `pet.txt' to correct the error, then empty the table and reload it with DELETE and LOAD DATA:
mysql> SET AUTOCOMMIT=1; # Used for quick re-create of the table
mysql> DELETE FROM pet;
mysql> LOAD DATA LOCAL INFILE "pet.txt" INTO TABLE pet;
However, if you do this, you must also re-enter the record for Puffball.
Alternatively, fix only the erroneous record with an UPDATE statement:
mysql> UPDATE pet SET birth = "1989-08-31" WHERE name = "Bowser";
As shown above, it is easy to retrieve an entire table. But typically you don't want to do that, particularly when the table becomes large. Instead, you're usually more interested in answering a particular question, in which case you specify some constraints on the information you want. Let's look at some selection queries in terms of questions about your pets that they answer.
You can select only particular rows from your table. For example, if you want to verify the change that you made to Bowser's birth date, select Bowser's record like this:
mysql> SELECT * FROM pet WHERE name = "Bowser";
+--------+-------+---------+------+------------+------------+
| name   | owner | species | sex  | birth      | death      |
+--------+-------+---------+------+------------+------------+
| Bowser | Diane | dog     | m    | 1989-08-31 | 1995-07-29 |
+--------+-------+---------+------+------------+------------+
The output confirms that the year is correctly recorded now as 1989, not 1998.
String comparisons are normally case-insensitive, so you can specify the
name as "bowser", "BOWSER", etc. The query result will be
the same.
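If you do want a case-sensitive match, one way (a minimal sketch) is to compare against a binary string with the BINARY keyword:
mysql> SELECT * FROM pet WHERE name = BINARY "Bowser";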
You can specify conditions on any column, not just name. For example,
if you want to know which animals were born after 1998, test the birth
column:
mysql> SELECT * FROM pet WHERE birth >= "1998-1-1";
+----------+-------+---------+------+------------+-------+
| name     | owner | species | sex  | birth      | death |
+----------+-------+---------+------+------------+-------+
| Chirpy   | Gwen  | bird    | f    | 1998-09-11 | NULL  |
| Puffball | Diane | hamster | f    | 1999-03-30 | NULL  |
+----------+-------+---------+------+------------+-------+
You can combine conditions, for example, to locate female dogs:
mysql> SELECT * FROM pet WHERE species = "dog" AND sex = "f";
+-------+--------+---------+------+------------+-------+
| name  | owner  | species | sex  | birth      | death |
+-------+--------+---------+------+------------+-------+
| Buffy | Harold | dog     | f    | 1989-05-13 | NULL  |
+-------+--------+---------+------+------------+-------+
The preceding query uses the AND logical operator. There is also an
OR operator:
mysql> SELECT * FROM pet WHERE species = "snake" OR species = "bird";
+----------+-------+---------+------+------------+-------+
| name     | owner | species | sex  | birth      | death |
+----------+-------+---------+------+------------+-------+
| Chirpy   | Gwen  | bird    | f    | 1998-09-11 | NULL  |
| Whistler | Gwen  | bird    | NULL | 1997-12-09 | NULL  |
| Slim     | Benny | snake   | m    | 1996-04-29 | NULL  |
+----------+-------+---------+------+------------+-------+
AND and OR may be intermixed. If you do that, it's a good idea
to use parentheses to indicate how conditions should be grouped:
mysql> SELECT * FROM pet WHERE (species = "cat" AND sex = "m")
-> OR (species = "dog" AND sex = "f");
+-------+--------+---------+------+------------+-------+
| name | owner | species | sex | birth | death |
+-------+--------+---------+------+------------+-------+
| Claws | Gwen | cat | m | 1994-03-17 | NULL |
| Buffy | Harold | dog | f | 1989-05-13 | NULL |
+-------+--------+---------+------+------------+-------+
If you don't want to see entire rows from your table, just name the columns
in which you're interested, separated by commas. For example, if you want to
know when your animals were born, select the name and birth
columns:
mysql> SELECT name, birth FROM pet;
+----------+------------+
| name     | birth      |
+----------+------------+
| Fluffy   | 1993-02-04 |
| Claws    | 1994-03-17 |
| Buffy    | 1989-05-13 |
| Fang     | 1990-08-27 |
| Bowser   | 1989-08-31 |
| Chirpy   | 1998-09-11 |
| Whistler | 1997-12-09 |
| Slim     | 1996-04-29 |
| Puffball | 1999-03-30 |
+----------+------------+
To find out who owns pets, use this query:
mysql> SELECT owner FROM pet;
+--------+
| owner  |
+--------+
| Harold |
| Gwen   |
| Harold |
| Benny  |
| Diane  |
| Gwen   |
| Gwen   |
| Benny  |
| Diane  |
+--------+
However, notice that the query simply retrieves the owner field from
each record, and some of them appear more than once. To minimise the output,
retrieve each unique output record just once by adding the keyword
DISTINCT:
mysql> SELECT DISTINCT owner FROM pet;
+--------+
| owner  |
+--------+
| Benny  |
| Diane  |
| Gwen   |
| Harold |
+--------+
You can use a WHERE clause to combine row selection with column
selection. For example, to get birth dates for dogs and cats only,
use this query:
mysql> SELECT name, species, birth FROM pet
-> WHERE species = "dog" OR species = "cat";
+--------+---------+------------+
| name | species | birth |
+--------+---------+------------+
| Fluffy | cat | 1993-02-04 |
| Claws | cat | 1994-03-17 |
| Buffy | dog | 1989-05-13 |
| Fang | dog | 1990-08-27 |
| Bowser | dog | 1989-08-31 |
+--------+---------+------------+
You may have noticed in the preceding examples that the result rows are
displayed in no particular order. However, it's often easier to examine
query output when the rows are sorted in some meaningful way. To sort a
result, use an ORDER BY clause.
Here are animal birthdays, sorted by date:
mysql> SELECT name, birth FROM pet ORDER BY birth;
+----------+------------+
| name     | birth      |
+----------+------------+
| Buffy    | 1989-05-13 |
| Bowser   | 1989-08-31 |
| Fang     | 1990-08-27 |
| Fluffy   | 1993-02-04 |
| Claws    | 1994-03-17 |
| Slim     | 1996-04-29 |
| Whistler | 1997-12-09 |
| Chirpy   | 1998-09-11 |
| Puffball | 1999-03-30 |
+----------+------------+
On character type columns, sorting, like all other comparison
operations, is normally performed in a case-insensitive fashion.
This means that the order will be undefined for columns that are identical
except for their case. You can force a case-sensitive sort by using the
BINARY cast: ORDER BY BINARY(field).
To sort in reverse order, add the DESC (descending) keyword to the
name of the column you are sorting by:
mysql> SELECT name, birth FROM pet ORDER BY birth DESC;
+----------+------------+
| name     | birth      |
+----------+------------+
| Puffball | 1999-03-30 |
| Chirpy   | 1998-09-11 |
| Whistler | 1997-12-09 |
| Slim     | 1996-04-29 |
| Claws    | 1994-03-17 |
| Fluffy   | 1993-02-04 |
| Fang     | 1990-08-27 |
| Bowser   | 1989-08-31 |
| Buffy    | 1989-05-13 |
+----------+------------+
You can sort on multiple columns. For example, to sort by type of animal, then by birth date within animal type with youngest animals first, use the following query:
mysql> SELECT name, species, birth FROM pet ORDER BY species, birth DESC;
+----------+---------+------------+
| name     | species | birth      |
+----------+---------+------------+
| Chirpy   | bird    | 1998-09-11 |
| Whistler | bird    | 1997-12-09 |
| Claws    | cat     | 1994-03-17 |
| Fluffy   | cat     | 1993-02-04 |
| Fang     | dog     | 1990-08-27 |
| Bowser   | dog     | 1989-08-31 |
| Buffy    | dog     | 1989-05-13 |
| Puffball | hamster | 1999-03-30 |
| Slim     | snake   | 1996-04-29 |
+----------+---------+------------+
Note that the DESC keyword applies only to the column name immediately
preceding it (birth); species values are still sorted in
ascending order.
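If you wanted both columns in descending order, the DESC keyword would need to be repeated after each column name; a minimal sketch:
mysql> SELECT name, species, birth FROM pet
    -> ORDER BY species DESC, birth DESC;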
MySQL provides several functions that you can use to perform calculations on dates, for example, to calculate ages or extract parts of dates.
To determine how many years old each of your pets is, compute the difference in the year part of the current date and the birth date, then subtract one if the current date occurs earlier in the calendar year than the birth date. The following query shows, for each pet, the birth date, the current date, and the age in years.
mysql> SELECT name, birth, CURRENT_DATE,
-> (YEAR(CURRENT_DATE)-YEAR(birth))
-> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5))
-> AS age
-> FROM pet;
+----------+------------+--------------+------+
| name | birth | CURRENT_DATE | age |
+----------+------------+--------------+------+
| Fluffy | 1993-02-04 | 2001-08-29 | 8 |
| Claws | 1994-03-17 | 2001-08-29 | 7 |
| Buffy | 1989-05-13 | 2001-08-29 | 12 |
| Fang | 1990-08-27 | 2001-08-29 | 11 |
| Bowser | 1989-08-31 | 2001-08-29 | 11 |
| Chirpy | 1998-09-11 | 2001-08-29 | 2 |
| Whistler | 1997-12-09 | 2001-08-29 | 3 |
| Slim | 1996-04-29 | 2001-08-29 | 5 |
| Puffball | 1999-03-30 | 2001-08-29 | 2 |
+----------+------------+--------------+------+
Here, YEAR() pulls out the year part of a date and RIGHT()
pulls off the rightmost five characters that represent the MM-DD
(calendar year) part of the date. The part of the expression that
compares the MM-DD values evaluates to 1 or 0, which adjusts the
year difference down a year if CURRENT_DATE occurs earlier in
the year than birth. The full expression is somewhat ungainly,
so an alias (age) is used to make the output column label more
meaningful.
The query works, but the result could be scanned more easily if the rows
were presented in some order. This can be done by adding an ORDER
BY name clause to sort the output by name:
mysql> SELECT name, birth, CURRENT_DATE,
-> (YEAR(CURRENT_DATE)-YEAR(birth))
-> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5))
-> AS age
-> FROM pet ORDER BY name;
+----------+------------+--------------+------+
| name | birth | CURRENT_DATE | age |
+----------+------------+--------------+------+
| Bowser | 1989-08-31 | 2001-08-29 | 11 |
| Buffy | 1989-05-13 | 2001-08-29 | 12 |
| Chirpy | 1998-09-11 | 2001-08-29 | 2 |
| Claws | 1994-03-17 | 2001-08-29 | 7 |
| Fang | 1990-08-27 | 2001-08-29 | 11 |
| Fluffy | 1993-02-04 | 2001-08-29 | 8 |
| Puffball | 1999-03-30 | 2001-08-29 | 2 |
| Slim | 1996-04-29 | 2001-08-29 | 5 |
| Whistler | 1997-12-09 | 2001-08-29 | 3 |
+----------+------------+--------------+------+
To sort the output by age rather than name, just use a
different ORDER BY clause:
mysql> SELECT name, birth, CURRENT_DATE,
-> (YEAR(CURRENT_DATE)-YEAR(birth))
-> - (RIGHT(CURRENT_DATE,5)<RIGHT(birth,5))
-> AS age
-> FROM pet ORDER BY age;
+----------+------------+--------------+------+
| name | birth | CURRENT_DATE | age |
+----------+------------+--------------+------+
| Chirpy | 1998-09-11 | 2001-08-29 | 2 |
| Puffball | 1999-03-30 | 2001-08-29 | 2 |
| Whistler | 1997-12-09 | 2001-08-29 | 3 |
| Slim | 1996-04-29 | 2001-08-29 | 5 |
| Claws | 1994-03-17 | 2001-08-29 | 7 |
| Fluffy | 1993-02-04 | 2001-08-29 | 8 |
| Fang | 1990-08-27 | 2001-08-29 | 11 |
| Bowser | 1989-08-31 | 2001-08-29 | 11 |
| Buffy | 1989-05-13 | 2001-08-29 | 12 |
+----------+------------+--------------+------+
A similar query can be used to determine age at death for animals that have
died. You determine which animals these are by checking whether the
death value is NULL. Then, for those with non-NULL
values, compute the difference between the death and birth
values:
mysql> SELECT name, birth, death,
-> (YEAR(death)-YEAR(birth)) - (RIGHT(death,5)<RIGHT(birth,5))
-> AS age
-> FROM pet WHERE death IS NOT NULL ORDER BY age;
+--------+------------+------------+------+
| name | birth | death | age |
+--------+------------+------------+------+
| Bowser | 1989-08-31 | 1995-07-29 | 5 |
+--------+------------+------------+------+
The query uses death IS NOT NULL rather than death <> NULL
because NULL is a special value. This is explained later.
See section 3.3.4.6 Working with NULL Values.
What if you want to know which animals have birthdays next month? For this
type of calculation, year and day are irrelevant; you simply want to extract
the month part of the birth column. MySQL provides several
date-part extraction functions, such as YEAR(), MONTH(), and
DAYOFMONTH(). MONTH() is the appropriate function here. To
see how it works, run a simple query that displays the value of both
birth and MONTH(birth):
mysql> SELECT name, birth, MONTH(birth) FROM pet;
+----------+------------+--------------+
| name     | birth      | MONTH(birth) |
+----------+------------+--------------+
| Fluffy   | 1993-02-04 |            2 |
| Claws    | 1994-03-17 |            3 |
| Buffy    | 1989-05-13 |            5 |
| Fang     | 1990-08-27 |            8 |
| Bowser   | 1989-08-31 |            8 |
| Chirpy   | 1998-09-11 |            9 |
| Whistler | 1997-12-09 |           12 |
| Slim     | 1996-04-29 |            4 |
| Puffball | 1999-03-30 |            3 |
+----------+------------+--------------+
Finding animals with birthdays in the upcoming month is easy, too. Suppose
the current month is April. Then the month value is 4 and you look
for animals born in May (month 5) like this:
mysql> SELECT name, birth FROM pet WHERE MONTH(birth) = 5;
+-------+------------+
| name  | birth      |
+-------+------------+
| Buffy | 1989-05-13 |
+-------+------------+
There is a small complication if the current month is December, of course.
You don't just add one to the month number (12) and look for animals
born in month 13, because there is no such month. Instead, you look for
animals born in January (month 1).
You can even write the query so that it works no matter what the current
month is. That way you don't have to use a particular month number
in the query. DATE_ADD() allows you to add a time interval to a
given date. If you add a month to the value of NOW(), then extract
the month part with MONTH(), the result produces the month in which to
look for birthdays:
mysql> SELECT name, birth FROM pet
-> WHERE MONTH(birth) = MONTH(DATE_ADD(NOW(), INTERVAL 1 MONTH));
A different way to accomplish the same task is to add 1 to get the
next month after the current one (after using the modulo function (MOD)
to wrap around the month value to 0 if it is currently
12):
mysql> SELECT name, birth FROM pet
-> WHERE MONTH(birth) = MOD(MONTH(NOW()), 12) + 1;
Note that MONTH returns a number between 1 and 12. And
MOD(something,12) returns a number between 0 and 11. So the
addition has to be after the MOD(), otherwise we would go from
November (11) to January (1).
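As a quick stand-alone check of the wrap-around (an illustration, not part of the original tutorial), you can evaluate the expression for December directly:
mysql> SELECT MOD(12, 12) + 1;
This returns 1, the month number for January.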
NULL Values
The NULL value can be surprising until you get used to it.
Conceptually, NULL means missing value or unknown value and it
is treated somewhat differently than other values. To test for NULL,
you cannot use the arithmetic comparison operators such as =, <,
or <>. To demonstrate this for yourself, try the following query:
mysql> SELECT 1 = NULL, 1 <> NULL, 1 < NULL, 1 > NULL;
+----------+-----------+----------+----------+
| 1 = NULL | 1 <> NULL | 1 < NULL | 1 > NULL |
+----------+-----------+----------+----------+
|     NULL |      NULL |     NULL |     NULL |
+----------+-----------+----------+----------+
Clearly you get no meaningful results from these comparisons. Use
the IS NULL and IS NOT NULL operators instead:
mysql> SELECT 1 IS NULL, 1 IS NOT NULL;
+-----------+---------------+
| 1 IS NULL | 1 IS NOT NULL |
+-----------+---------------+
|         0 |             1 |
+-----------+---------------+
Note that in MySQL, 0 or NULL means false and anything else means
true. The default truth value from a boolean operation is 1.
This special treatment of NULL is why, in the previous section, it
was necessary to determine which animals are no longer alive using
death IS NOT NULL instead of death <> NULL.
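For instance, to list the animals whose sex is unrecorded in the pet table (a small sketch using the tutorial data), you would write:
mysql> SELECT name FROM pet WHERE sex IS NULL;
With the sample data, only Whistler is returned.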
Two NULL values are regarded as equal in a GROUP BY.
When doing an ORDER BY, NULL values are presented first if you
do ORDER BY ... ASC and last if you do ORDER BY ... DESC.
Note that between MySQL 4.0.2 - 4.0.10, NULL values incorrectly
were always sorted first regardless of the sort direction.
MySQL provides standard SQL pattern matching as well as a form of
pattern matching based on extended regular expressions similar to those used
by Unix utilities such as vi, grep, and sed.
SQL pattern matching allows you to use `_' to match any single
character and `%' to match an arbitrary number of characters (including
zero characters). In MySQL, SQL patterns are case-insensitive by
default. Some examples are shown here. Note that you do not use =
or <> when you use SQL patterns; use the LIKE or NOT
LIKE comparison operators instead.
To find names beginning with `b':
mysql> SELECT * FROM pet WHERE name LIKE "b%";
+--------+--------+---------+------+------------+------------+
| name   | owner  | species | sex  | birth      | death      |
+--------+--------+---------+------+------------+------------+
| Buffy  | Harold | dog     | f    | 1989-05-13 | NULL       |
| Bowser | Diane  | dog     | m    | 1989-08-31 | 1995-07-29 |
+--------+--------+---------+------+------------+------------+
To find names ending with `fy':
mysql> SELECT * FROM pet WHERE name LIKE "%fy";
+--------+--------+---------+------+------------+-------+
| name   | owner  | species | sex  | birth      | death |
+--------+--------+---------+------+------------+-------+
| Fluffy | Harold | cat     | f    | 1993-02-04 | NULL  |
| Buffy  | Harold | dog     | f    | 1989-05-13 | NULL  |
+--------+--------+---------+------+------------+-------+
To find names containing a `w':
mysql> SELECT * FROM pet WHERE name LIKE "%w%";
+----------+-------+---------+------+------------+------------+
| name     | owner | species | sex  | birth      | death      |
+----------+-------+---------+------+------------+------------+
| Claws    | Gwen  | cat     | m    | 1994-03-17 | NULL       |
| Bowser   | Diane | dog     | m    | 1989-08-31 | 1995-07-29 |
| Whistler | Gwen  | bird    | NULL | 1997-12-09 | NULL       |
+----------+-------+---------+------+------------+------------+
To find names containing exactly five characters, use the `_' pattern character:
mysql> SELECT * FROM pet WHERE name LIKE "_____";
+-------+--------+---------+------+------------+-------+
| name  | owner  | species | sex  | birth      | death |
+-------+--------+---------+------+------------+-------+
| Claws | Gwen   | cat     | m    | 1994-03-17 | NULL  |
| Buffy | Harold | dog     | f    | 1989-05-13 | NULL  |
+-------+--------+---------+------+------------+-------+
The other type of pattern matching provided by MySQL uses extended
regular expressions. When you test for a match for this type of pattern, use
the REGEXP and NOT REGEXP operators (or RLIKE and
NOT RLIKE, which are synonyms).
Some characteristics of extended regular expressions are: `.' matches any single character; a character class such as `[a-z]' matches any character listed within the brackets; `*' matches zero or more instances of the preceding element; and the pattern matches if it occurs anywhere in the value, unless you anchor it with `^' (beginning) or `$' (end).
To demonstrate how extended regular expressions work, the LIKE queries
shown previously are rewritten here to use REGEXP.
To find names beginning with `b', use `^' to match the beginning of the name:
mysql> SELECT * FROM pet WHERE name REGEXP "^b";
+--------+--------+---------+------+------------+------------+
| name   | owner  | species | sex  | birth      | death      |
+--------+--------+---------+------+------------+------------+
| Buffy  | Harold | dog     | f    | 1989-05-13 | NULL       |
| Bowser | Diane  | dog     | m    | 1989-08-31 | 1995-07-29 |
+--------+--------+---------+------+------------+------------+
Prior to MySQL Version 3.23.4, REGEXP was case-sensitive,
and the previous query would return no rows. To match either lowercase or
uppercase `b', use this query instead:
mysql> SELECT * FROM pet WHERE name REGEXP "^[bB]";
From MySQL 3.23.4 on, to force a REGEXP comparison to
be case-sensitive, use the BINARY keyword to make one of the
strings a binary string. This query will match only lowercase `b'
at the beginning of a name:
mysql> SELECT * FROM pet WHERE name REGEXP BINARY "^b";
To find names ending with `fy', use `$' to match the end of the name:
mysql> SELECT * FROM pet WHERE name REGEXP "fy$";
+--------+--------+---------+------+------------+-------+
| name   | owner  | species | sex  | birth      | death |
+--------+--------+---------+------+------------+-------+
| Fluffy | Harold | cat     | f    | 1993-02-04 | NULL  |
| Buffy  | Harold | dog     | f    | 1989-05-13 | NULL  |
+--------+--------+---------+------+------------+-------+
To find names containing a lowercase or uppercase `w', use this query:
mysql> SELECT * FROM pet WHERE name REGEXP "w";
+----------+-------+---------+------+------------+------------+
| name     | owner | species | sex  | birth      | death      |
+----------+-------+---------+------+------------+------------+
| Claws    | Gwen  | cat     | m    | 1994-03-17 | NULL       |
| Bowser   | Diane | dog     | m    | 1989-08-31 | 1995-07-29 |
| Whistler | Gwen  | bird    | NULL | 1997-12-09 | NULL       |
+----------+-------+---------+------+------------+------------+
A regular expression pattern matches if it occurs anywhere in the value, so in the previous query it is not necessary to put a wildcard on either side of the pattern; with a SQL pattern, you would need wildcards to match the entire value.
To find names containing exactly five characters, use `^' and `$' to match the beginning and end of the name, and five instances of `.' in between:
mysql> SELECT * FROM pet WHERE name REGEXP "^.....$";
+-------+--------+---------+------+------------+-------+
| name  | owner  | species | sex  | birth      | death |
+-------+--------+---------+------+------------+-------+
| Claws | Gwen   | cat     | m    | 1994-03-17 | NULL  |
| Buffy | Harold | dog     | f    | 1989-05-13 | NULL  |
+-------+--------+---------+------+------------+-------+
You could also write the previous query using the `{n}'
``repeat-n-times'' operator:
mysql> SELECT * FROM pet WHERE name REGEXP "^.{5}$";
+-------+--------+---------+------+------------+-------+
| name | owner | species | sex | birth | death |
+-------+--------+---------+------+------------+-------+
| Claws | Gwen | cat | m | 1994-03-17 | NULL |
| Buffy | Harold | dog | f | 1989-05-13 | NULL |
+-------+--------+---------+------+------------+-------+
Databases are often used to answer the question, ``How often does a certain type of data occur in a table?'' For example, you might want to know how many pets you have, or how many pets each owner has, or you might want to perform various kinds of censuses on your animals.
Counting the total number of animals you have is the same question as ``How
many rows are in the pet table?'' because there is one record per pet.
The COUNT() function counts the number of non-NULL results, so
the query to count your animals looks like this:
mysql> SELECT COUNT(*) FROM pet;
+----------+
| COUNT(*) |
+----------+
|        9 |
+----------+
Earlier, you retrieved the names of the people who owned pets. You can
use COUNT() if you want to find out how many pets each owner has:
mysql> SELECT owner, COUNT(*) FROM pet GROUP BY owner;
+--------+----------+
| owner  | COUNT(*) |
+--------+----------+
| Benny  |        2 |
| Diane  |        2 |
| Gwen   |        3 |
| Harold |        2 |
+--------+----------+
Note the use of GROUP BY to group together all records for each
owner. Without it, all you get is an error message:
mysql> SELECT owner, COUNT(owner) FROM pet;
ERROR 1140 at line 1: Mixing of GROUP columns (MIN(),MAX(),COUNT()...)
with no GROUP columns is illegal if there is no GROUP BY clause
COUNT() and GROUP BY are useful for characterising your
data in various ways. The following examples show different ways to
perform animal census operations.
Number of animals per species:
mysql> SELECT species, COUNT(*) FROM pet GROUP BY species;
+---------+----------+
| species | COUNT(*) |
+---------+----------+
| bird    |        2 |
| cat     |        2 |
| dog     |        3 |
| hamster |        1 |
| snake   |        1 |
+---------+----------+
Number of animals per sex:
mysql> SELECT sex, COUNT(*) FROM pet GROUP BY sex;
+------+----------+
| sex  | COUNT(*) |
+------+----------+
| NULL |        1 |
| f    |        4 |
| m    |        4 |
+------+----------+
(In this output, NULL indicates sex unknown.)
Number of animals per combination of species and sex:
mysql> SELECT species, sex, COUNT(*) FROM pet GROUP BY species, sex;
+---------+------+----------+
| species | sex  | COUNT(*) |
+---------+------+----------+
| bird    | NULL |        1 |
| bird    | f    |        1 |
| cat     | f    |        1 |
| cat     | m    |        1 |
| dog     | f    |        1 |
| dog     | m    |        2 |
| hamster | f    |        1 |
| snake   | m    |        1 |
+---------+------+----------+
You need not retrieve an entire table when you use COUNT(). For
example, the previous query, when performed just on dogs and cats, looks like
this:
mysql> SELECT species, sex, COUNT(*) FROM pet
-> WHERE species = "dog" OR species = "cat"
-> GROUP BY species, sex;
+---------+------+----------+
| species | sex | COUNT(*) |
+---------+------+----------+
| cat | f | 1 |
| cat | m | 1 |
| dog | f | 1 |
| dog | m | 2 |
+---------+------+----------+
Or, if you wanted the number of animals per sex only for known-sex animals:
mysql> SELECT species, sex, COUNT(*) FROM pet
-> WHERE sex IS NOT NULL
-> GROUP BY species, sex;
+---------+------+----------+
| species | sex | COUNT(*) |
+---------+------+----------+
| bird | f | 1 |
| cat | f | 1 |
| cat | m | 1 |
| dog | f | 1 |
| dog | m | 2 |
| hamster | f | 1 |
| snake | m | 1 |
+---------+------+----------+
The pet table keeps track of which pets you have. If you want to
record other information about them, such as events in their lives like
visits to the vet or when litters are born, you need another table. What
should this table look like? It needs the pet's name (so you know which animal each event applies to), a date (so you know when the event occurred), a field describing the event, and an event type field if you want to categorise events.
Given these considerations, the CREATE TABLE statement for the
event table might look like this:
mysql> CREATE TABLE event (name VARCHAR(20), date DATE,
-> type VARCHAR(15), remark VARCHAR(255));
As with the pet table, it's easiest to load the initial records
by creating a tab-delimited text file containing the information:
| name | date | type | remark |
| Fluffy | 1995-05-15 | litter | 4 kittens, 3 female, 1 male |
| Buffy | 1993-06-23 | litter | 5 puppies, 2 female, 3 male |
| Buffy | 1994-06-19 | litter | 3 puppies, 3 female |
| Chirpy | 1999-03-21 | vet | needed beak straightened |
| Slim | 1997-08-03 | vet | broken rib |
| Bowser | 1991-10-12 | kennel | |
| Fang | 1991-10-12 | kennel | |
| Fang | 1998-08-28 | birthday | Gave him a new chew toy |
| Claws | 1998-03-17 | birthday | Gave him a new flea collar |
| Whistler | 1998-12-09 | birthday | First birthday |
Load the records like this:
mysql> LOAD DATA LOCAL INFILE "event.txt" INTO TABLE event;
Based on what you've learned from the queries you've run on the pet
table, you should be able to perform retrievals on the records in the
event table; the principles are the same. But when is the
event table by itself insufficient to answer questions you might ask?
Suppose you want to find out the ages of each pet when they had their
litters. The event table indicates when this occurred, but to
calculate the age of the mother, you need her birth date. Because that is
stored in the pet table, you need both tables for the query:
mysql> SELECT pet.name,
-> (TO_DAYS(date) - TO_DAYS(birth))/365 AS age,
-> remark
-> FROM pet, event
-> WHERE pet.name = event.name AND type = "litter";
+--------+------+-----------------------------+
| name | age | remark |
+--------+------+-----------------------------+
| Fluffy | 2.27 | 4 kittens, 3 female, 1 male |
| Buffy | 4.12 | 5 puppies, 2 female, 3 male |
| Buffy | 5.10 | 3 puppies, 3 female |
+--------+------+-----------------------------+
There are several things to note about this query:
The FROM clause lists two tables because the query needs to pull
information from both of them.
When combining (joining) information from multiple tables, you need to
specify how records in one table can be matched to records in the other.
Here both tables have a name column, so the query uses the
WHERE clause to match up records in the two tables based on the
name values.
Because the name column occurs in both tables, you must be specific
about which table you mean when referring to the column. This is done
by prepending the table name to the column name.
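The same join can also be written with an explicit INNER JOIN ... ON clause. This is an equivalent formulation (a sketch, not shown in the original example) that some people find easier to read because the join condition is separated from the other restrictions:
mysql> SELECT pet.name,
    -> (TO_DAYS(date) - TO_DAYS(birth))/365 AS age,
    -> remark
    -> FROM pet INNER JOIN event ON pet.name = event.name
    -> WHERE type = "litter";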
You need not have two different tables to perform a join. Sometimes it is
useful to join a table to itself, if you want to compare records in a table
to other records in that same table. For example, to find breeding pairs
among your pets, you can join the pet table with itself to pair up
males and females of like species:
mysql> SELECT p1.name, p1.sex, p2.name, p2.sex, p1.species
-> FROM pet AS p1, pet AS p2
-> WHERE p1.species = p2.species AND p1.sex = "f" AND p2.sex = "m";
+--------+------+--------+------+---------+
| name | sex | name | sex | species |
+--------+------+--------+------+---------+
| Fluffy | f | Claws | m | cat |
| Buffy | f | Fang | m | dog |
| Buffy | f | Bowser | m | dog |
+--------+------+--------+------+---------+
In this query, we specify aliases for the table names so that each column reference makes clear which instance of the table it is associated with.
What if you forget the name of a database or table, or what the structure of a given table is (for example, what its columns are called)? MySQL addresses this problem through several statements that provide information about the databases and tables it supports.
You have already seen SHOW DATABASES, which lists the databases
managed by the server. To find out which database is currently selected,
use the DATABASE() function:
mysql> SELECT DATABASE();
+------------+
| DATABASE() |
+------------+
| menagerie  |
+------------+
If you haven't selected any database yet, the result is blank.
To find out what tables the current database contains (for example, when you're not sure about the name of a table), use this command:
mysql> SHOW TABLES;
+---------------------+
| Tables in menagerie |
+---------------------+
| event               |
| pet                 |
+---------------------+
If you want to find out about the structure of a table, the DESCRIBE
command is useful; it displays information about each of a table's columns:
mysql> DESCRIBE pet;
+---------+-------------+------+-----+---------+-------+
| Field   | Type        | Null | Key | Default | Extra |
+---------+-------------+------+-----+---------+-------+
| name    | varchar(20) | YES  |     | NULL    |       |
| owner   | varchar(20) | YES  |     | NULL    |       |
| species | varchar(20) | YES  |     | NULL    |       |
| sex     | char(1)     | YES  |     | NULL    |       |
| birth   | date        | YES  |     | NULL    |       |
| death   | date        | YES  |     | NULL    |       |
+---------+-------------+------+-----+---------+-------+
Field indicates the column name, Type is the data type for
the column, NULL indicates whether the column can contain
NULL values, Key indicates whether the column is
indexed, and Default specifies the column's default value.
If you have indexes on a table,
SHOW INDEX FROM tbl_name produces information about them.
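For example (a sketch; the tutorial tables were created without any explicit indexes, so on them the result may simply be empty):
mysql> SHOW INDEX FROM pet;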
Here are examples of how to solve some common problems with MySQL.
Some of the examples use the table shop to hold the price of each
article (item number) for certain traders (dealers). Supposing that each
trader has a single fixed price per article, then (article,
dealer) is a primary key for the records.
Start the command-line tool mysql and select a database:
mysql your-database-name
(In most MySQL installations, you can use the database name 'test'.)
You can create the example table as:
CREATE TABLE shop (
  article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL,
  dealer CHAR(20) DEFAULT '' NOT NULL,
  price DOUBLE(16,2) DEFAULT '0.00' NOT NULL,
  PRIMARY KEY(article, dealer));
INSERT INTO shop VALUES
  (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45),(3,'C',1.69),
  (3,'D',1.25),(4,'D',19.95);
Okay, so the example data is:
mysql> SELECT * FROM shop;
+---------+--------+-------+
| article | dealer | price |
+---------+--------+-------+
|    0001 | A      |  3.45 |
|    0001 | B      |  3.99 |
|    0002 | A      | 10.99 |
|    0003 | B      |  1.45 |
|    0003 | C      |  1.69 |
|    0003 | D      |  1.25 |
|    0004 | D      | 19.95 |
+---------+--------+-------+
``What's the highest item number?''
SELECT MAX(article) AS article FROM shop
+---------+
| article |
+---------+
|       4 |
+---------+
``Find number, dealer, and price of the most expensive article.''
In SQL-99 (and MySQL Version 4.1) this is easily done with a subquery:
SELECT article, dealer, price FROM shop WHERE price=(SELECT MAX(price) FROM shop)
In MySQL versions prior to 4.1, just do it in two steps:
First, get the maximum price value from the table with a SELECT statement.
Then, using that value, run the actual query:
SELECT article, dealer, price FROM shop WHERE price=19.95
Another solution is to sort all rows descending by price and only
get the first row using the MySQL-specific LIMIT clause:
SELECT article, dealer, price FROM shop ORDER BY price DESC LIMIT 1
NOTE: If there are several most expensive articles (for example, several priced at 19.95),
the LIMIT solution shows only one of them!
``What's the highest price per article?''
SELECT article, MAX(price) AS price FROM shop GROUP BY article
+---------+-------+
| article | price |
+---------+-------+
|    0001 |  3.99 |
|    0002 | 10.99 |
|    0003 |  1.69 |
|    0004 | 19.95 |
+---------+-------+
``For each article, find the dealer(s) with the most expensive price.''
In SQL-99 (and MySQL Version 4.1 or greater), I'd do it with a subquery like this:
SELECT article, dealer, price
FROM shop s1
WHERE price=(SELECT MAX(s2.price)
FROM shop s2
WHERE s1.article = s2.article);
In MySQL versions prior to 4.1 it's best to do it in several steps:
This can easily be done with a temporary table:
CREATE TEMPORARY TABLE tmp (
article INT(4) UNSIGNED ZEROFILL DEFAULT '0000' NOT NULL,
price DOUBLE(16,2) DEFAULT '0.00' NOT NULL);
LOCK TABLES shop READ;
INSERT INTO tmp SELECT article, MAX(price) FROM shop GROUP BY article;
SELECT shop.article, dealer, shop.price FROM shop, tmp
WHERE shop.article=tmp.article AND shop.price=tmp.price;
UNLOCK TABLES;
DROP TABLE tmp;
If you don't use a TEMPORARY table, you must also lock the 'tmp' table.
``Can it be done with a single query?''
Yes, but only by using a quite inefficient trick that I call the ``MAX-CONCAT trick'':
SELECT article,
SUBSTRING( MAX( CONCAT(LPAD(price,6,'0'),dealer) ), 7) AS dealer,
0.00+LEFT( MAX( CONCAT(LPAD(price,6,'0'),dealer) ), 6) AS price
FROM shop
GROUP BY article;
+---------+--------+-------+
| article | dealer | price |
+---------+--------+-------+
| 0001 | B | 3.99 |
| 0002 | A | 10.99 |
| 0003 | C | 1.69 |
| 0004 | D | 19.95 |
+---------+--------+-------+
The last example can, of course, be made a bit more efficient by doing the splitting of the concatenated column in the client.
You can use MySQL user variables to remember results without having to store them in temporary variables in the client. See section 6.1.4 User Variables.
For example, to find the articles with the highest and lowest price you can do:
mysql> SELECT @min_price:=MIN(price),@max_price:=MAX(price) FROM shop;
mysql> SELECT * FROM shop WHERE price=@min_price OR price=@max_price;
+---------+--------+-------+
| article | dealer | price |
+---------+--------+-------+
|    0003 | D      |  1.25 |
|    0004 | D      | 19.95 |
+---------+--------+-------+
In MySQL 3.23.44 and up, InnoDB tables support checking of
foreign key constraints. See section 7.5 InnoDB Tables.
See also section 1.8.4.5 Foreign Keys.
You don't actually need foreign keys to join two tables.
For table types other than InnoDB, the only things MySQL currently
doesn't do are check that the keys you use really exist in the
table(s) you're referencing, and automatically delete rows from a
table with a foreign key definition. If you use your keys as usual,
it'll work just fine:
CREATE TABLE person (
id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
name CHAR(60) NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE shirt (
id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
style ENUM('t-shirt', 'polo', 'dress') NOT NULL,
color ENUM('red', 'blue', 'orange', 'white', 'black') NOT NULL,
owner SMALLINT UNSIGNED NOT NULL REFERENCES person(id),
PRIMARY KEY (id)
);
INSERT INTO person VALUES (NULL, 'Antonio Paz');
INSERT INTO shirt VALUES
(NULL, 'polo', 'blue', LAST_INSERT_ID()),
(NULL, 'dress', 'white', LAST_INSERT_ID()),
(NULL, 't-shirt', 'blue', LAST_INSERT_ID());
INSERT INTO person VALUES (NULL, 'Lilliana Angelovska');
INSERT INTO shirt VALUES
(NULL, 'dress', 'orange', LAST_INSERT_ID()),
(NULL, 'polo', 'red', LAST_INSERT_ID()),
(NULL, 'dress', 'blue', LAST_INSERT_ID()),
(NULL, 't-shirt', 'white', LAST_INSERT_ID());
SELECT * FROM person;
+----+---------------------+
| id | name |
+----+---------------------+
| 1 | Antonio Paz |
| 2 | Lilliana Angelovska |
+----+---------------------+
SELECT * FROM shirt;
+----+---------+--------+-------+
| id | style | color | owner |
+----+---------+--------+-------+
| 1 | polo | blue | 1 |
| 2 | dress | white | 1 |
| 3 | t-shirt | blue | 1 |
| 4 | dress | orange | 2 |
| 5 | polo | red | 2 |
| 6 | dress | blue | 2 |
| 7 | t-shirt | white | 2 |
+----+---------+--------+-------+
SELECT s.* FROM person p, shirt s
WHERE p.name LIKE 'Lilliana%'
AND s.owner = p.id
AND s.color <> 'white';
+----+-------+--------+-------+
| id | style | color | owner |
+----+-------+--------+-------+
| 4 | dress | orange | 2 |
| 5 | polo | red | 2 |
| 6 | dress | blue | 2 |
+----+-------+--------+-------+
MySQL doesn't yet optimise the case where you search on two different
keys combined with OR (searching on one key with different OR
parts is optimised quite well):
SELECT field1_index, field2_index FROM test_table WHERE field1_index = '1' OR field2_index = '1'
The reason is that we haven't yet had time to come up with an efficient
way to handle this in the general case. (The AND handling is,
in comparison, now completely general and works very well.)
For the moment you can solve this very efficiently by using a
TEMPORARY table. This type of optimisation is also very good if
you are using very complicated queries where the SQL server does the
optimisations in the wrong order.
CREATE TEMPORARY TABLE tmp
  SELECT field1_index, field2_index FROM test_table WHERE field1_index = '1';
INSERT INTO tmp
  SELECT field1_index, field2_index FROM test_table WHERE field2_index = '1';
SELECT * FROM tmp;
DROP TABLE tmp;
The above way to solve this query is in effect a UNION of two queries.
See section 6.4.1.2 UNION Syntax.
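As the previous sentence notes, this is in effect a UNION; in MySQL 4.0 and up you can write it directly instead of using the temporary table. A minimal sketch:
SELECT field1_index, field2_index FROM test_table WHERE field1_index = '1'
UNION
SELECT field1_index, field2_index FROM test_table WHERE field2_index = '1';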
The following shows an idea of how you can use the bit group functions to calculate the number of days per month a user has visited a web page.
CREATE TABLE t1 (year YEAR(4), month INT(2) UNSIGNED ZEROFILL,
day INT(2) UNSIGNED ZEROFILL);
INSERT INTO t1 VALUES(2000,1,1),(2000,1,20),(2000,1,30),(2000,2,2),
(2000,2,23),(2000,2,23);
SELECT year,month,BIT_COUNT(BIT_OR(1<<day)) AS days FROM t1
GROUP BY year,month;
Which returns:
+------+-------+------+
| year | month | days |
+------+-------+------+
| 2000 | 01 | 3 |
| 2000 | 02 | 2 |
+------+-------+------+
The above calculates how many different days were used for a given year/month combination, with automatic removal of duplicate entries.
AUTO_INCREMENT
The AUTO_INCREMENT attribute can be used to generate a unique
identity for new rows:
CREATE TABLE animals (
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO animals (name) VALUES ("dog"),("cat"),("penguin"),
("lax"),("whale");
SELECT * FROM animals;
Which returns:
+----+---------+
| id | name |
+----+---------+
| 1 | dog |
| 2 | cat |
| 3 | penguin |
| 4 | lax |
| 5 | whale |
+----+---------+
You can retrieve the used AUTO_INCREMENT key with the
LAST_INSERT_ID() SQL function or the mysql_insert_id() API
function.
Note: for a multi-row insert,
LAST_INSERT_ID()/mysql_insert_id() will actually return the
AUTO_INCREMENT key from the first inserted row. This allows
multi-row inserts to be reproduced on other servers.
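For example, right after the INSERT above you could retrieve the generated identifier (a small sketch):
SELECT LAST_INSERT_ID();
With the five-row insert shown, this returns the id generated for the first of the inserted rows.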
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a
secondary column in a multi-column key. In this case, the generated
value for the AUTO_INCREMENT column is calculated as
MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is
useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
);
INSERT INTO animals (grp,name) VALUES("mammal","dog"),("mammal","cat"),
("bird","penguin"),("fish","lax"),("mammal","whale");
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
+--------+----+---------+
Note that in this case, the AUTO_INCREMENT value will be reused if you
delete the row with the biggest AUTO_INCREMENT value in any group.
mysql in Batch Mode
In the previous sections, you used mysql interactively to enter
queries and view the results. You can also run mysql in batch
mode. To do this, put the commands you want to run in a file, then
tell mysql to read its input from the file:
shell> mysql < batch-file
If you are running mysql under Windows and have some special
characters in the file that cause problems, you can do:
dos> mysql -e "source batch-file"
If you need to specify connection parameters on the command-line, the command might look like this:
shell> mysql -h host -u user -p < batch-file
Enter password: ********
When you use mysql this way, you are creating a script file, then
executing the script.
If you want the script to continue even if you have errors, you should
use the --force command-line option.
Why use a script? Here are a few reasons:
If you run a query repeatedly, putting it in a script saves you from retyping it each time; if you make a mistake, you just edit the script and tell mysql to execute it again.
If a query produces a lot of output, you can run the output through a pager rather than watching it scroll off the top of your screen:
shell> mysql < batch-file | more
You can capture the output in a file for further processing:
shell> mysql < batch-file > mysql.out
Some situations do not allow for interactive use, for example, when you run a query from a
cron job. In this case, you must use batch mode.
The default output format is different (more concise) when you run
mysql in batch mode than when you use it interactively. For
example, the output of SELECT DISTINCT species FROM pet looks like
this when run interactively:
+---------+
| species |
+---------+
| bird    |
| cat     |
| dog     |
| hamster |
| snake   |
+---------+
But like this when run in batch mode:
species
bird
cat
dog
hamster
snake
If you want to get the interactive output format in batch mode, use
mysql -t. To echo to the output the commands that are executed, use
mysql -vvv.
You can also use scripts in the mysql command-line prompt by
using the source command:
mysql> source filename;
At Analytikerna and Lentus, we have been doing the systems and field work for a big research project. This project is a collaboration between the Institute of Environmental Medicine at Karolinska Institutet Stockholm and the Section on Clinical Research in Aging and Psychology at the University of Southern California.
The project involves a screening part where all twins in Sweden older than 65 years are interviewed by telephone. Twins who meet certain criteria are passed on to the next stage. In this latter stage, twins who want to participate are visited by a doctor/nurse team. Some of the examinations include physical and neuropsychological examination, laboratory testing, neuroimaging, psychological status assessment, and family history collection. In addition, data are collected on medical and environmental risk factors.
More information about Twin studies can be found at: http://www.imm.ki.se/TWIN/TWINUKW.HTM
The latter part of the project is administered with a web interface written using Perl and MySQL.
Each night all data from the interviews is moved into a MySQL database.
The following query is used to determine who goes into the second part of the project:
SELECT
CONCAT(p1.id, p1.tvab) + 0 AS tvid,
CONCAT(p1.christian_name, " ", p1.surname) AS Name,
p1.postal_code AS Code,
p1.city AS City,
pg.abrev AS Area,
IF(td.participation = "Aborted", "A", " ") AS A,
p1.dead AS dead1,
l.event AS event1,
td.suspect AS tsuspect1,
id.suspect AS isuspect1,
td.severe AS tsevere1,
id.severe AS isevere1,
p2.dead AS dead2,
l2.event AS event2,
h2.nurse AS nurse2,
h2.doctor AS doctor2,
td2.suspect AS tsuspect2,
id2.suspect AS isuspect2,
td2.severe AS tsevere2,
id2.severe AS isevere2,
l.finish_date
FROM
twin_project AS tp
/* For Twin 1 */
LEFT JOIN twin_data AS td ON tp.id = td.id
AND tp.tvab = td.tvab
LEFT JOIN informant_data AS id ON tp.id = id.id
AND tp.tvab = id.tvab
LEFT JOIN harmony AS h ON tp.id = h.id
AND tp.tvab = h.tvab
LEFT JOIN lentus AS l ON tp.id = l.id
AND tp.tvab = l.tvab
/* For Twin 2 */
LEFT JOIN twin_data AS td2 ON p2.id = td2.id
AND p2.tvab = td2.tvab
LEFT JOIN informant_data AS id2 ON p2.id = id2.id
AND p2.tvab = id2.tvab
LEFT JOIN harmony AS h2 ON p2.id = h2.id
AND p2.tvab = h2.tvab
LEFT JOIN lentus AS l2 ON p2.id = l2.id
AND p2.tvab = l2.tvab,
person_data AS p1,
person_data AS p2,
postal_groups AS pg
WHERE
/* p1 gets main twin and p2 gets his/her twin. */
/* ptvab is a field inverted from tvab */
p1.id = tp.id AND p1.tvab = tp.tvab AND
p2.id = p1.id AND p2.ptvab = p1.tvab AND
/* Just the screening survey */
tp.survey_no = 5 AND
/* Skip if partner died before 65 but allow emigration (dead=9) */
(p2.dead = 0 OR p2.dead = 9 OR
(p2.dead = 1 AND
(p2.death_date = 0 OR
(((TO_DAYS(p2.death_date) - TO_DAYS(p2.birthday)) / 365)
>= 65))))
AND
(
/* Twin is suspect */
(td.future_contact = 'Yes' AND td.suspect = 2) OR
/* Twin is suspect - Informant is Blessed */
(td.future_contact = 'Yes' AND td.suspect = 1
AND id.suspect = 1) OR
/* No twin - Informant is Blessed */
(ISNULL(td.suspect) AND id.suspect = 1
AND id.future_contact = 'Yes') OR
/* Twin broken off - Informant is Blessed */
(td.participation = 'Aborted'
AND id.suspect = 1 AND id.future_contact = 'Yes') OR
/* Twin broken off - No inform - Have partner */
(td.participation = 'Aborted' AND ISNULL(id.suspect)
AND p2.dead = 0))
AND
l.event = 'Finished'
/* Get at area code */
AND SUBSTRING(p1.postal_code, 1, 2) = pg.code
/* Not already distributed */
AND (h.nurse IS NULL OR h.nurse=00 OR h.doctor=00)
/* Has not refused or been aborted */
AND NOT (h.status = 'Refused' OR h.status = 'Aborted'
OR h.status = 'Died' OR h.status = 'Other')
ORDER BY
tvid;
Some explanations:
CONCAT(p1.id, p1.tvab) + 0 AS tvid
We want to sort on the concatenated id and tvab in
numerical order. Adding 0 to the result causes MySQL to
treat the result as a number.
id
This column identifies a pair of twins; it appears in all the twin-related tables.
tvab
This column identifies a twin within a pair; its value is
1 or 2.
ptvab
This is an inverse of tvab. When tvab is 1 this is
2, and vice versa. It exists to save typing and to make it easier for
MySQL to optimise the query.
This query demonstrates, among other things, how to do lookups on a
table from the same table with a join (p1 and p2). In the example, this
is used to check whether a twin's partner died before the age of 65. If so,
the row is not returned.
All of the above exist in all tables with twin-related information. We
have a key on both id,tvab (all tables), and id,ptvab
(person_data) to make queries faster.
On our production machine (A 200MHz UltraSPARC), this query returns about 150-200 rows and takes less than one second.
The current number of records in the tables used above:
| Table          | Rows  |
| person_data    | 71074 |
| lentus         | 5291  |
| twin_project   | 5286  |
| twin_data      | 2012  |
| informant_data | 663   |
| harmony        | 381   |
| postal_groups  | 100   |
Each interview ends with a status code called event. The query
shown here is used to display a table over all twin pairs combined by
event. This indicates in how many pairs both twins are finished, in how many
pairs one twin is finished and the other refused, and so on.
SELECT
t1.event,
t2.event,
COUNT(*)
FROM
lentus AS t1,
lentus AS t2,
twin_project AS tp
WHERE
/* We are looking at one pair at a time */
t1.id = tp.id
AND t1.tvab=tp.tvab
AND t1.id = t2.id
/* Just the screening survey */
AND tp.survey_no = 5
/* This makes each pair only appear once */
AND t1.tvab='1' AND t2.tvab='2'
GROUP BY
t1.event, t2.event;
There are programs that let you authenticate your users from a MySQL database and also let you write your log files into a MySQL table.
You can change the Apache logging format to be easily readable by MySQL by putting the following into the Apache configuration file:
LogFormat \
"\"%h\",%{%Y%m%d%H%M%S}t,%>s,\"%b\",\"%{Content-Type}o\", \
\"%U\",\"%{Referer}i\",\"%{User-Agent}i\""
In MySQL you can do a variation of:
LOAD DATA INFILE '/local/access_log' INTO TABLE table_name FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
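As a minimal sketch (the table and column names here are only illustrative choices matching the LogFormat fields above, not part of any standard), the receiving table could be created like this:
mysql> CREATE TABLE access_log (
    ->   remote_host  VARCHAR(50),     # %h
    ->   request_time TIMESTAMP,       # %{%Y%m%d%H%M%S}t
    ->   status       SMALLINT,        # %>s
    ->   bytes_sent   INT,             # %b
    ->   content_type VARCHAR(50),     # %{Content-Type}o
    ->   url          VARCHAR(255),    # %U
    ->   referer      VARCHAR(255),    # %{Referer}i
    ->   user_agent   VARCHAR(255));   # %{User-Agent}i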
mysqld Command-line Options
In most cases you should manage mysqld options through option files. See section 4.1.2 `my.cnf' Option Files.
mysqld and mysql.server read options from the
mysqld and server groups. mysqld_safe reads options
from the mysqld, server, mysqld_safe, and
safe_mysqld groups. An embedded MySQL server usually reads
options from the server, embedded, and
xxxxx_SERVER groups, where xxxxx is the name of the application.
mysqld accepts a lot of command-line options. The following list
describes some of the most common ones. For a full list, execute mysqld --help.
--ansi
-b, --basedir=path
--big-tables
--bind-address=IP
--console
Write error log messages to stderr/stdout even if --log-error
is specified. On Windows, mysqld will not close the console screen if
this option is used.
--character-sets-dir=path
--chroot=path
Puts the mysqld daemon in a chroot environment at startup. This is a
recommended security measure as of MySQL 4.0 (MySQL 3.23 is not able to provide
a 100% closed chroot jail).
It somewhat limits LOAD DATA INFILE and
SELECT ... INTO OUTFILE though.
--core-file
Write a core file if mysqld dies. For some systems you must also
specify --core-file-size to safe_mysqld.
See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
Note that on some systems, like Solaris, you will
not get a core file if you are also using the --user option.
-h, --datadir=path
--debug[...]=
If mysqld is compiled with --with-debug, you can use this
option to get a trace file of what mysqld is doing.
See section E.1.2 Creating Trace Files.
--default-character-set=charset
--default-table-type=type
--delay-key-write[= OFF | ON | ALL]
DELAYED KEYS should be used. See section 5.5.2 Tuning Server Parameters.
--delay-key-write-for-all-tables; In MySQL 4.0.3 you should use --delay-key-write=ALL instead.
MyISAM table.
See section 5.5.2 Tuning Server Parameters.
--des-key-file=filename
Read the default keys used by DES_ENCRYPT() and DES_DECRYPT()
from this file.
--enable-external-locking (was --enable-locking)
Enable system (external) locking. Note that on a system on which
lockd does not fully work (as on Linux), you will easily get
mysqld to deadlock.
--enable-named-pipe
-T, --exit-info
--flush
-?, --help
--init-file=file
-L, --language=...
-l, --log[=file]
--log-bin=[file]
--log-bin-index[=file]
--log-error[=file]
--log-isam[=file]
--log-slow-queries[=file]
Log all queries that have taken more than long_query_time seconds to
execute to file. See section 4.9.5 The Slow Query Log.
--log-update[=file]
Log updates to file.#, where # is a unique number if not
given. See section 4.9.3 The Update Log. The update log is deprecated and will be
removed in MySQL 5.0; you should use the binary log instead
(--log-bin). See section 4.9.4 The Binary Log. Starting from version 5.0,
using --log-update will just turn on the binlog instead.
--log-long-format
If you are using this together with --log-slow-queries, then queries that are not using indexes are logged
to the slow query log.
--low-priority-updates
Table-modifying operations (INSERT/DELETE/UPDATE)
will have lower priority than selects. It can also be done via
{INSERT | REPLACE | UPDATE | DELETE} LOW_PRIORITY ... to lower
the priority of only one query, or by
SET LOW_PRIORITY_UPDATES=1 to change the priority in one
thread. See section 5.3.2 Table Locking Issues.
--memlock
Lock the mysqld process in memory. This works only if your
system supports the mlockall() system call (like Solaris). This
may help if you have a problem where the operating system is causing
mysqld to swap on disk.
--myisam-recover [=option[,option...]]
The option value is any combination of DEFAULT, BACKUP, FORCE or QUICK. You can
also set this explicitly to "" if you want to disable this
option. If this option is used, mysqld will on open check if the
table is marked as crashed or if the table wasn't closed properly.
(The last option only works if you are running with
--skip-external-locking.) If this is the case mysqld will run a
check on the table. If the table was corrupted, mysqld will
attempt to repair it.
The following options affect how the repair works:
| Option | Description |
| DEFAULT | The same as not giving any option to --myisam-recover. |
| BACKUP | If the data table was changed during recovery, save a backup of the `table_name.MYD' datafile as `table_name-datetime.BAK'. |
| FORCE | Run recovery even if we will lose more than one row from the .MYD file. |
| QUICK | Don't check the rows in the table if there aren't any delete blocks. |
A typical setting is BACKUP,FORCE. This will force a repair of a table even if some rows
would be deleted, but it will keep the old datafile as a backup so that
you can later examine what happened.
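For example, the suggested combination can be given either on the command line or in an option file (the values shown are just the BACKUP,FORCE setting discussed above):
shell> mysqld --myisam-recover=BACKUP,FORCE
or, in the [mysqld] group of `my.cnf':
myisam-recover = BACKUP,FORCE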
--new
--new option can be used to make the server
behave as 4.1 in certain aspects, easing a 4.0 to 4.1 upgrade:
TIMESTAMP is returned as a string with the format
'YYYY-MM-DD HH:MM:SS'.
See section 6.2 Column Types.
--pid-file=path
Path to the pid file used by safe_mysqld.
-P, --port=...
-o, --old-protocol
--one-thread
-O, --set-variable var=option
--help lists variables. You can find a full
description for all variables in the SHOW VARIABLES section in this
manual. See section 4.5.7.4 SHOW VARIABLES. The tuning server parameters section includes
information about how to optimise these. Please note that --set-variable
has been deprecated since MySQL 4.0; just use --variable=option on its own.
See section 5.5.2 Tuning Server Parameters.
In MySQL 4.0.2 one can set a variable directly with
--variable-name=option and set-variable is no longer needed
in option files.
If you want to restrict the maximum value a startup option can be set to
with SET, you can define this by using the
--maximum-variable-name command line option. See section 5.5.6 SET Syntax.
Note that when setting a variable to a value, MySQL may automatically
correct the value to stay within a given range, and may also adjust it
slightly to fit the algorithm used.
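For example, the following two command lines set the same variable, the first with the deprecated set-variable syntax and the second with the direct syntax available from MySQL 4.0.2 (the 16M value is only an illustration):
shell> mysqld --set-variable key_buffer_size=16M
shell> mysqld --key_buffer_size=16M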
--safe-mode
--safe-show-database
With this option, the SHOW DATABASES command returns only those
databases for which the user has some kind of privilege.
From version 4.0.2 this option is deprecated and doesn't do anything
(the option is enabled by default) as we now have the
SHOW DATABASES privilege. See section 4.3.1 GRANT and REVOKE Syntax.
--safe-user-create
With this option, a user can't create new users with the GRANT command if the user doesn't have the INSERT privilege for the
mysql.user table or any column in this table.
--skip-bdb
--skip-concurrent-insert
Turn off the ability to select and insert at the same time on MyISAM
tables. (This is only to be used if you think you have found a bug in this
feature.)
--skip-delay-key-write; In MySQL 4.0.3 you should use --delay-key-write=OFF instead.
DELAY_KEY_WRITE option for all tables.
See section 5.5.2 Tuning Server Parameters.
--skip-grant-tables
This option causes the server not to use the privilege system at all, giving everyone full access to all databases! (You can tell a running server to start using the grant tables again by running mysqladmin
flush-privileges or mysqladmin reload.)
--skip-host-cache
--skip-innodb
--skip-external-locking (was --skip-locking)
Don't use system locking. To use isamchk or myisamchk you must
shut down the server. See section 1.2.3 How Stable Is MySQL?. Note that in MySQL Version
3.23 you can use REPAIR and CHECK to repair/check MyISAM
tables.
--skip-name-resolve
Hostnames are not resolved. All Host column values in the grant tables
must be IP numbers or localhost. See section 5.5.5 How MySQL uses DNS.
--skip-networking
Don't allow TCP/IP connections over the network. All connections to mysqld must be made via Unix sockets. This option is highly
recommended for systems where only local requests are allowed. See section 5.5.5 How MySQL uses DNS.
--skip-new
--skip-symlink
--symbolic-links || --skip-symbolic-links
CREATE TABLE .... INDEX/DATA DIRECTORY="path-to-dir" command.
When you delete or rename a table, the file that the symbolic link points
to will also be deleted/renamed.
On Windows, this means that you can create a directory.sym file that
contains the path to the real directory. See section 2.6.2.5 Splitting Data Across Different Disks on Windows.
--skip-safemalloc
If MySQL is configured with --with-debug=full, all programs
will check the memory for overruns for every memory allocation and memory
freeing. As this checking is very slow, you can avoid this, when you don't
need memory checking, by using this option.
--skip-show-database
Don't allow the SHOW DATABASES command, unless the user has the
SHOW DATABASES privilege. From version 4.0.2 you should no longer
need this option, since access can now be granted specifically with the
SHOW DATABASES privilege.
--skip-stack-trace
Don't write stack traces. This option is useful when you are running
mysqld under a debugger. On some systems you also have to use
this option to get a core file. See section E.1 Debugging a MySQL server.
--skip-thread-priority
--socket=path
Socket file to use for local connections instead of the default /tmp/mysql.sock.
--sql-mode=option[,option[,option...]]
The option value can be any combination of: REAL_AS_FLOAT,
PIPES_AS_CONCAT, ANSI_QUOTES, IGNORE_SPACE,
SERIALIZE, ONLY_FULL_GROUP_BY. It can also be empty
("") if you want to reset this.
Specifying all of the above options is the same as using --ansi.
With this option one can turn on only the SQL modes that are needed. See section 1.8.2 Running MySQL in ANSI Mode.
--temp-pool
--transaction-isolation= { READ-UNCOMMITTED | READ-COMMITTED | REPEATABLE-READ | SERIALIZABLE }
Sets the default transaction isolation level. See the section on SET TRANSACTION Syntax.
-t, --tmpdir=path
Path of the directory to use for creating temporary files. It may be useful if your default /tmp
directory resides on a partition too small to hold temporary tables.
Starting from MySQL 4.1, this option accepts several paths separated
by colon : (semicolon ; on Windows). They will be used
in round-robin fashion.
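For example, from MySQL 4.1 you could spread temporary files over two disks like this (the directory names are only examples):
shell> mysqld --tmpdir=/disk1/mysqltmp:/disk2/mysqltmp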
-u, --user= [user_name | userid]
Run the mysqld daemon as user user_name or userid (numeric).
This option is mandatory when starting mysqld as root.
Starting from MySQL 3.23.56 and 4.0.12:
To avoid a possible security hole where a user adds an --user=root
option to some `my.cnf' file, mysqld will only use the first
--user option specified and give a warning if there are multiple
options. Note that `/etc/my.cnf' and `datadir/my.cnf' are read before any
command-line options; because only the first --user option is used, it is
recommended to put this option in `/etc/my.cnf'.
-V, --version
-W, --log-warnings (Was --warnings)
Print out warnings such as Aborted connection... to the
`.err' file. Enabling it is recommended if you use replication,
for example (you will get more messages about what is happening,
such as network failures and reconnections). See section A.2.9 Communication Errors / Aborted Connection.
One can change most values for a running server with the
SET command. See section 5.5.6 SET Syntax.
MySQL can, since Version 3.22, read default startup options for the server and for clients from option files.
MySQL reads default options from the following files on Unix:
| Filename | Purpose |
| /etc/my.cnf | Global options |
| DATADIR/my.cnf | Server-specific options |
| defaults-extra-file | The file specified with --defaults-extra-file=# |
| ~/.my.cnf | User-specific options |
DATADIR is the MySQL data directory (typically
`/usr/local/mysql/data' for a binary installation or
`/usr/local/var' for a source installation). Note that this is the
directory that was specified at configuration time, not the one specified
with --datadir when mysqld starts up! (--datadir has no
effect on where the server looks for option files, because it looks for them
before it processes any command-line arguments.)
MySQL reads default options from the following files on Windows:
| Filename | Purpose |
| windows-system-directory\my.ini | Global options |
| C:\my.cnf | Global options |
Note that on Windows, you should specify all paths with / instead of
\. If you use \, you need to specify this twice, as
\ is the escape character in MySQL.
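For example, assuming a hypothetical installation directory `C:\mysql', either of the following lines would work in an option file, while a single unescaped backslash would not:
[mysqld]
basedir=C:/mysql
# or equivalently:
# basedir=C:\\mysql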
MySQL tries to read option files in the order listed above. If multiple option files exist, an option specified in a file read later takes precedence over the same option specified in a file read earlier. Options specified on the command-line take precedence over options specified in any option file. Some options can be specified using environment variables. Options specified on the command-line or in option files take precedence over environment variable values. See section F Environment Variables.
The following programs support option files: mysql,
mysqladmin, mysqld, mysqld_safe, mysql.server,
mysqldump, mysqlimport, mysqlshow, mysqlcheck,
myisamchk, and myisampack.
Any long option that may be given on the command-line when running a MySQL
program can be given in an option file as well (without the leading double
dash). Run the program with --help to get a list of available options.
An option file can contain lines of the following forms:
#comment
[group]
group is the name of the program or group for which you want to set
options. After a group line, any option or set-variable lines
apply to the named group until the end of the option file or another group
line is given.
option
--option on the command-line.
option=value
--option=value on the command-line.
set-variable = variable=value
--set-variable variable=value on the command-line.
This syntax must be used to set a mysqld variable.
Please note that --set-variable is deprecated since MySQL 4.0,
just use --variable=value on its own.
The client group allows you to specify options that apply to all
MySQL clients (not mysqld). This is the perfect group to use
to specify the password you use to connect to the server. (But make
sure the option file is readable and writable only by yourself.)
Note that for options and values, all leading and trailing blanks are automatically deleted. You may use the escape sequences `\b', `\t', `\n', `\r', `\\', and `\s' in your value string (`\s' == blank).
Here is a typical global option file:
[client]
port=3306
socket=/tmp/mysql.sock

[mysqld]
port=3306
socket=/tmp/mysql.sock
set-variable = key_buffer_size=16M
set-variable = max_allowed_packet=1M

[mysqldump]
quick
Here is a typical user option file:
[client]
# The following password will be sent to all standard MySQL clients
password=my_password

[mysql]
no-auto-rehash
set-variable = connect_timeout=2

[mysqlhotcopy]
interactive-timeout
If you have a source distribution, you will find sample configuration
files named `my-xxxx.cnf' in the `support-files' directory.
If you have a binary distribution, look in the `DIR/support-files'
directory, where DIR is the pathname to the MySQL
installation directory (typically `/usr/local/mysql'). Currently
there are sample configuration files for small, medium, large, and very
large systems. You can copy `my-xxxx.cnf' to your home directory
(rename the copy to `.my.cnf') to experiment with this.
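For example (my-medium.cnf is just one of the provided samples; adjust the path to your installation):
shell> cp /usr/local/mysql/support-files/my-medium.cnf ~/.my.cnf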
All MySQL clients that support option files support the following options:
| Option | Description |
| --no-defaults | Don't read any option files. |
| --print-defaults | Print the program name and all options that it will get. |
| --defaults-file=full-path-to-default-file | Only use the given configuration file. |
| --defaults-extra-file=full-path-to-default-file | Read this configuration file after the global configuration file but before the user configuration file. |
Note that the above options must be first on the command-line to work!
--print-defaults may however be used directly after the
--defaults-xxx-file commands.
Note for developers: Option file handling is implemented simply by processing all matching options (that is, options in the appropriate group) before any command-line arguments. This works nicely for programs that use the last instance of an option that is specified multiple times. If you have an old program that handles multiply-specified options this way but doesn't read option files, you need to add only two lines to give it that capability. Check the source code of any of the standard MySQL clients to see how to do this.
In shell scripts you can use the `my_print_defaults' command to parse the config files:
shell> my_print_defaults client mysql
--port=3306
--socket=/tmp/mysql.sock
--no-auto-rehash
The above output contains all options for the groups 'client' and 'mysql'.
In some cases you may want to have many different mysqld daemons
(servers) running on the same machine. You may for example want to run
a new version of MySQL for testing together with an old version
that is in production. Another case is when you want to give different
users access to different mysqld servers that they manage themselves.
One way to get a new server running is by starting it with a different socket and port as follows:
shell> MYSQL_UNIX_PORT=/tmp/mysqld-new.sock
shell> MYSQL_TCP_PORT=3307
shell> export MYSQL_UNIX_PORT MYSQL_TCP_PORT
shell> scripts/mysql_install_db
shell> bin/safe_mysqld &
The environment variables appendix includes a list of other environment
variables you can use to affect mysqld. See section F Environment Variables.
The above is the quick and dirty way that one commonly uses for testing. The nice thing about this is that all connections you make in the above shell will automatically be directed to the new running server!
If you need to do this more permanently, you should create an option file for each server. See section 4.1.2 `my.cnf' Option Files. In your startup script that is executed at boot time you should specify for both servers:
safe_mysqld --defaults-file=path-to-option-file
At least the following options should be different per server:
The following options should be different, if they are used:
If you want more performance, you can also specify the following differently:
See section 4.1.1 mysqld Command-line Options.
Starting from MySQL 4.1, tmpdir can be set to a list of paths
separated by colon : (semicolon ; on Windows). They
will be used in round-robin fashion. This feature can be used to
spread load between several physical disks.
If you are installing binary MySQL versions (.tar files) and
start them with ./bin/safe_mysqld then in most cases the only
option you need to add/change is the socket and port
argument to safe_mysqld.
See section 4.1.4 Running Multiple MySQL Servers on the Same Machine.
There are circumstances when you might want to run multiple servers on the same machine. For example, you might want to test a new MySQL release while leaving your existing production setup undisturbed. Or you might be an Internet service provider that wants to provide independent MySQL installations for different customers.
If you want to run multiple servers, the easiest way is to compile the servers
with different TCP/IP ports and socket files so they are not
both listening to the same TCP/IP port or socket file. See section 4.7.3 mysqld_multi, A Program for Managing Multiple MySQL Servers.
Assume an existing server is configured for the default port number and
socket file. Then configure the new server with a configure command
something like this:
shell> ./configure --with-tcp-port=port_number \
--with-unix-socket-path=file_name \
--prefix=/usr/local/mysql-3.22.9
Here port_number and file_name should be different from the
default port number and socket file pathname, and the --prefix value
should specify an installation directory different from the one under which
the existing MySQL installation is located.
You can check the socket used by any currently executing MySQL server with this command:
shell> mysqladmin -h hostname --port=port_number variables
Note that if you specify ``localhost'' as a hostname, mysqladmin
will default to using Unix sockets instead of TCP/IP.
In MySQL 4.1 you can also specify the protocol to use by using the
--protocol=(TCP | SOCKET | PIPE | MEMORY) option.
If you have a MySQL server running on the port you used, you will get a list of some of the most important configurable variables in MySQL, including the socket name.
You don't have to recompile a new MySQL server just to start with
a different port and socket. You can change the port and socket to be used
by specifying them at runtime as options to safe_mysqld:
shell> /path/to/safe_mysqld --socket=file_name --port=port_number
mysqld_multi can also take safe_mysqld (or mysqld)
as an argument and pass the options from a configuration file to
safe_mysqld and further to mysqld.
If you run the new server on the same database directory as another
server with logging enabled, you should also specify the name of the log
files to safe_mysqld with --log, --log-update,
--log-bin or --log-slow-queries. Otherwise, both
servers may be trying to write to the same log file.
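A minimal sketch of such a second-server startup, assuming hypothetical paths, a free port, and distinct log names:
shell> safe_mysqld --socket=/tmp/mysql-second.sock --port=3307 \
           --datadir=/usr/local/mysql/data2 \
           --log-bin=second-bin --log-slow-queries=second-slow.log &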
Warning: normally you should never have two servers that update data in the same database! If your OS doesn't support fault-free system locking, this may lead to unpleasant surprises!
If you want to use another database directory for the second server, you
can use the --datadir=path option to safe_mysqld.
Note also that starting several MySQL servers
(mysqlds) on different machines and letting them access one data
directory over NFS is generally a bad idea! The problem
is that NFS will become a speed bottleneck; it is
not meant for such use. And last but not least, you would still have to
come up with a solution to make sure that two or more mysqlds
do not interfere with each other. At the moment there is no platform
that does the file locking (usually via the lockd daemon)
100% reliably in every situation. NFS would also add one more risk:
it would make the work even more complicated for the
lockd daemon to handle. So make it easy for yourself and forget
about the idea. The working solution is to have one computer with an
operating system that efficiently handles threads, and to have several CPUs
in it.
When you want to connect to a MySQL server that is running with a different port than the port that is compiled into your client, you can use one of the following methods:
Start your clients with --host=host_name --port=port_number to connect
with TCP/IP, or with [--host=localhost] --socket=file_name to connect via
a Unix socket.
From MySQL 4.1, start your clients with --protocol=tcp to connect with TCP/IP and
--protocol=socket to connect via a Unix socket.
If you use the Perl DBD::mysql module, you can read the options
from the MySQL option files. See section 4.1.2 `my.cnf' Option Files.
$dsn = "DBI:mysql:test;mysql_read_default_group=client;
mysql_read_default_file=/usr/local/mysql/data/my.cnf"
$dbh = DBI->connect($dsn, $user, $password);
Set the MYSQL_UNIX_PORT and MYSQL_TCP_PORT environment variables
to point to the Unix socket and TCP/IP port before you start your clients.
If you normally use a specific socket or port, you should place commands
to set these environment variables in your `.login' file.
See section F Environment Variables.
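For example, with a csh-style login shell the `.login' file could contain lines like these (the socket path and port number are only examples):
setenv MYSQL_UNIX_PORT /tmp/mysql-alt.sock
setenv MYSQL_TCP_PORT 3307
With a Bourne-style shell you would use export statements in `.profile' instead.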
MySQL has an advanced but non-standard security/privilege system. This section describes how it works.
Anyone using MySQL on a computer connected to the Internet should read this section to avoid the most common security mistakes.
In discussing security, we emphasise the necessity of fully protecting the entire server host (not simply the MySQL server) against all types of applicable attacks: eavesdropping, altering, playback, and denial of service. We do not cover all aspects of availability and fault tolerance here.
MySQL uses security based on Access Control Lists (ACLs) for all connections, queries, and other operations that a user may attempt to perform. There is also some support for SSL-encrypted connections between MySQL clients and servers. Many of the concepts discussed here are not specific to MySQL at all; the same general ideas apply to almost all applications.
When running MySQL, follow these guidelines whenever possible:
Do not give anyone (except the MySQL root user) access to the
user table in the mysql database! This is critical.
The encrypted password is the real password in MySQL.
Anyone who knows the password that is listed in the user table
and has access to the host listed for the account can easily log
in as that user.
Learn the MySQL access privilege system. The GRANT and
REVOKE commands are used for controlling access to MySQL. Do
not grant any more privileges than necessary. Never grant privileges to all
hosts.
Checklist:
Try mysql -u root. If you are able to connect successfully to the
server without being asked for a password, you have problems. Anyone
can connect to your MySQL server as the MySQL
root user with full privileges!
Review the MySQL installation instructions, paying particular
attention to the item about setting a root password.
Use the SHOW GRANTS command and check to see who has access to
what. Remove those privileges that are not necessary using the REVOKE
command (see the example after this checklist).
Do not keep any plain-text passwords in your database. Instead use
MD5(), SHA1() or another one-way hashing function.
Check your MySQL port with a port scanner such as
nmap. MySQL uses port 3306 by default. This port should
be inaccessible from untrusted hosts. Another simple way to check whether
or not your MySQL port is open is to try the following command
from some remote machine, where server_host is the hostname of
your MySQL server:
shell> telnet server_host 3306
If you get a connection and some garbage characters, the port is open, and should be closed on your firewall or router, unless you really have a good reason to keep it open. If telnet just hangs or the
connection is refused, everything is OK; the port is blocked.
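For example, checking one account and revoking an unneeded global privilege could look like this (the user and host names are only placeholders):
mysql> SHOW GRANTS FOR 'bob'@'thomas.loc.gov';
mysql> REVOKE FILE ON *.* FROM 'bob'@'thomas.loc.gov';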
Do not trust any data entered by users of your applications. They can try
to trick your code by entering special or escaped character sequences, for example ``; DROP
DATABASE mysql;''. This is an extreme example, but large security leaks
and data loss may occur as a result of hackers using similar techniques,
if you do not prepare for them.
Also remember to check numeric data. A common mistake is to protect only
strings. Sometimes people think that if a database contains only publicly
available data that it need not be protected. This is incorrect. At least
denial-of-service type attacks can be performed on such
databases. The simplest way to protect from this type of attack is to use
apostrophes around the numeric constants: SELECT * FROM table
WHERE ID='234' rather than SELECT * FROM table WHERE ID=234.
MySQL automatically converts this string to a number and
strips all non-numeric symbols from it.
Checklist:
When checking web applications, also try modifying dynamic URLs by appending %22 (`"'), %23
(`#'), and %27 (`'') to them.
PHP: use the addslashes() function.
As of PHP 4.0.3, a mysql_escape_string() function is available
that is based on the function of the same name in the MySQL C API.
MySQL C API: use the mysql_real_escape_string() API call.
MySQL++: use the escape and quote modifiers for query streams.
Perl DBI: use the quote() method or use placeholders.
Java JDBC: use a PreparedStatement object and placeholders.
Try to sniff the data streams between a client and the server with the
tcpdump and strings utilities. For most cases,
you can check whether MySQL data streams are unencrypted
by issuing a command like the following:
shell> tcpdump -l -i eth0 -w - src or dst port 3306 | strings
(This works under Linux and should work with small modifications under other systems.) Warning: If you do not see data this doesn't always actually mean that it is encrypted. If you need high security, you should consult with a security expert.
When you connect to a MySQL server, you normally should use a password. The password is not transmitted in clear text over the connection, however the encryption algorithm is not very strong, and with some effort a clever attacker can crack the password if he is able to sniff the traffic between the client and the server. If the connection between the client and the server goes through an untrusted network, you should use an SSH tunnel to encrypt the communication.
All other information is transferred as text that can be read by anyone
who is able to watch the connection. If you are concerned about this,
you can use the compressed protocol (in MySQL Version 3.22 and above)
to make things much harder. To make things even more secure you should use
ssh. You can find an Open Source ssh client at
http://www.openssh.org/, and a commercial ssh client at
http://www.ssh.com/. With this, you can get an encrypted TCP/IP
connection between a MySQL server and a MySQL client.
If you are using MySQL 4.0, you can also use internal OpenSSL support. See section 4.3.9 Using Secure Connections.
To make a MySQL system secure, you should strongly consider the following suggestions:
All MySQL users should have a password. Remember that anyone can log in
as any other person simply with mysql -u other_user db_name if
other_user has no password. It is common behaviour with client/server
applications that the client may specify any user name. You can change the
password of all users by editing the mysql_install_db script before
you run it, or only the password for the MySQL root user like
this:
shell> mysql -u root mysql
mysql> UPDATE user SET Password=PASSWORD('new_password')
-> WHERE user='root';
mysql> FLUSH PRIVILEGES;
Do not run the MySQL daemon as the Unix root user. This is
very dangerous, because any user with the FILE privilege will be able
to create files as root (for example, ~root/.bashrc). To
prevent this, mysqld will refuse to run as root unless it
is specified directly using a --user=root option.
mysqld can be run as an ordinary unprivileged user instead.
You can also create a new Unix user mysql to make everything
even more secure. If you run mysqld as another Unix user,
you don't need to change the root user name in the user
table, because MySQL user names have nothing to do with Unix
user names. To start mysqld as another Unix user, add a user
line that specifies the user name to the [mysqld] group of the
`/etc/my.cnf' option file or the `my.cnf' option file in the
server's data directory. For example:
[mysqld]
user=mysql
This will cause the server to start as the designated user whether you start it manually or by using
safe_mysqld or mysql.server.
For more details, see section A.3.2 How to Run MySQL As a Normal User.
Do not allow symlinks to tables (this can be disabled with the
--skip-symlink option). This is especially important if you run
mysqld as root, as anyone that has write access to the mysqld data
directories could then delete any file in the system!
See section 5.6.1.2 Using Symbolic Links for Tables.
Make sure that the Unix user that mysqld runs as is the only user with
read/write privileges in the database directories.
Do not grant the PROCESS privilege to all users. The output of
mysqladmin processlist shows the text of the currently executing
queries, so any user who is allowed to execute that command might be able to
see if another user issues an UPDATE user SET
password=PASSWORD('not_secure') query.
mysqld reserves an extra connection for users who have the
PROCESS privilege, so that a MySQL root user can log
in and check things even if all normal connections are in use.
Do not grant the FILE privilege to all users. Any user that has this
privilege can write a file anywhere in the filesystem with the privileges of
the mysqld daemon! To make this a bit safer, all files generated with
SELECT ... INTO OUTFILE are writeable by everyone, and you cannot
overwrite existing files.
The FILE privilege may also be used to read any world-readable
file that is accessible to the Unix user that the server runs as. One can also
read any file into the current database (for which the user needs some privilege).
This could be abused, for example, by using LOAD DATA to load
`/etc/passwd' into a table, which can then be read with
SELECT.
If you want to restrict the number of connections allowed to a single user,
you can do so by setting the max_user_connections variable in
mysqld.
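For example, in the [mysqld] group of an option file (the value 50 is only an illustration):
set-variable = max_user_connections=50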
Startup Options for mysqld Concerning Security
The following mysqld options affect security:
--local-infile[=(0|1)]
If started with --local-infile=0, one can't use LOAD DATA LOCAL
INFILE.
--safe-show-database
With this option, the SHOW DATABASES command returns only those
databases for which the user has some kind of privilege.
From version 4.0.2 this option is deprecated and doesn't do anything
(the option is enabled by default) as we now have the
SHOW DATABASES privilege. See section 4.3.1 GRANT and REVOKE Syntax.
--safe-user-create
With this option, a user can't create new users with the GRANT
command if the user doesn't have the INSERT privilege for the
mysql.user table. If you want to give a user access to just create
new users with those privileges that the user has right to grant, you should
give the user the following privilege:
mysql> GRANT INSERT(user) ON mysql.user TO 'user'@'hostname';
This will ensure that the user can't change any privilege columns directly, but has to use the
GRANT command to give privileges to other users.
--skip-grant-tables
This option causes the server not to use the privilege system at all,
giving everyone full access to all databases! (You can tell a running
server to start using the grant tables again by running mysqladmin
flush-privileges or mysqladmin reload.)
--skip-name-resolve
Hostnames are not resolved. All Host column values in the grant
tables must be IP numbers or localhost.
--skip-networking
Don't allow TCP/IP connections over the network. All connections to mysqld must be made via Unix sockets.
This option is unsuitable when using a MySQL version prior to 3.23.27 with
the MIT-pthreads package, because Unix sockets were not supported by
MIT-pthreads at that time.
--skip-show-database
Don't allow the SHOW DATABASES command, unless the user has the
SHOW DATABASES privilege. From version 4.0.2 you should no longer
need this option, since access can now be granted specifically with the
SHOW DATABASES privilege.
In MySQL 3.23.49 and MySQL 4.0.2, we added some new options to deal with
possible security issues when it comes to LOAD DATA LOCAL.
There are two possible problems with supporting this command:
As the reading of the file is initiated from the server, one could theoretically create a patched MySQL server that could read any file on the client machine that the current user has read access to, when the client issues a query against the table.
In a web environment where the clients are connecting from a web
server, a user could use LOAD DATA LOCAL to read any files
that the web server process has read access to (assuming a user could
run any command against the SQL server).
There are two separate fixes for this:
If you don't configure MySQL with --enable-local-infile, then
LOAD DATA LOCAL will be disabled by all clients, unless one
calls mysql_options(... MYSQL_OPT_LOCAL_INFILE, 0) in the client.
See section 8.1.3.163 mysql_options().
For the mysql command-line client, LOAD DATA LOCAL can be
enabled by specifying the option --local-infile[=1], or disabled
with --local-infile=0.
By default, all MySQL clients and libraries are compiled with
--enable-local-infile, to be compatible with MySQL 3.23.48 and
before.
One can disable all LOAD DATA LOCAL commands in the MySQL server
by starting mysqld with --local-infile=0.
In the case that LOAD DATA LOCAL INFILE is disabled in the server or
the client, you will get the error message (1148):
The used command is not allowed with this MySQL version
The primary function of the MySQL privilege system is to
authenticate a user connecting from a given host, and to associate that user
with privileges on a database such as
SELECT, INSERT, UPDATE and DELETE.
Additional functionality includes the ability to have an anonymous user and
to grant privileges for MySQL-specific functions such as LOAD
DATA INFILE and administrative operations.
The MySQL privilege system ensures that all users may do exactly the things that they are supposed to be allowed to do. When you connect to a MySQL server, your identity is determined by the host from which you connect and the user name you specify. The system grants privileges according to your identity and what you want to do.
MySQL considers both your hostname and user name in identifying you
because there is little reason to assume that a given user name belongs to
the same person everywhere on the Internet. For example, the user
joe who connects from office.com need not be the same
person as the user joe who connects from elsewhere.com.
MySQL handles this by allowing you to distinguish users on different
hosts that happen to have the same name: you can grant joe one set
of privileges for connections from office.com, and a different set
of privileges for connections from elsewhere.com.
MySQL access control involves two stages:
Stage 1: The server checks whether you are allowed to connect at all.
Stage 2: Assuming you can connect, the server checks each request you issue
to see whether you have sufficient privileges to perform it. For example, if
you try to select rows from a table in a database or drop a table from the
database, the server makes sure you have the SELECT
privilege for the table or the DROP privilege for the database.
The server uses the user, db, and host tables in the
mysql database at both stages of access control. The fields in these
grant tables are shown here:
| Table name | user | db | host |
| Scope fields | Host | Host | Host |
| | User | Db | Db |
| | Password | User | |
| Privilege fields | Select_priv | Select_priv | Select_priv |
| | Insert_priv | Insert_priv | Insert_priv |
| | Update_priv | Update_priv | Update_priv |
| | Delete_priv | Delete_priv | Delete_priv |
| | Index_priv | Index_priv | Index_priv |
| | Alter_priv | Alter_priv | Alter_priv |
| | Create_priv | Create_priv | Create_priv |
| | Drop_priv | Drop_priv | Drop_priv |
| | Grant_priv | Grant_priv | Grant_priv |
| | References_priv | | |
| | Reload_priv | | |
| | Shutdown_priv | | |
| | Process_priv | | |
| | File_priv | | |
| | Show_db_priv | | |
| | Super_priv | | |
| | Create_tmp_table_priv | Create_tmp_table_priv | Create_tmp_table_priv |
| | Lock_tables_priv | Lock_tables_priv | Lock_tables_priv |
| | Execute_priv | | |
| | Repl_slave_priv | | |
| | Repl_client_priv | | |
| | ssl_type | | |
| | ssl_cipher | | |
| | x509_issuer | | |
| | x509_subject | | |
| | max_questions | | |
| | max_updates | | |
| | max_connections | | |
For the second stage of access control (request verification), the server
may, if the request involves tables, additionally consult the
tables_priv and columns_priv tables. The fields in these
tables are shown here:
| Table name | tables_priv | columns_priv |
| Scope fields | Host | Host |
| | Db | Db |
| | User | User |
| | Table_name | Table_name |
| | | Column_name |
| Privilege fields | Table_priv | Column_priv |
| | Column_priv | |
| Other fields | Timestamp | Timestamp |
| | Grantor | |
Each grant table contains scope fields and privilege fields.
Scope fields determine the scope of each entry in the tables, that is, the
context in which the entry applies. For example, a user table entry
with Host and User values of 'thomas.loc.gov' and
'bob' would be used for authenticating connections made to the server
by bob from the host thomas.loc.gov. Similarly, a db
table entry with Host, User, and Db fields of
'thomas.loc.gov', 'bob' and 'reports' would be used when
bob connects from the host thomas.loc.gov to access the
reports database. The tables_priv and columns_priv
tables contain scope fields indicating tables or table/column combinations
to which each entry applies.
For access-checking purposes, comparisons of Host values are
case-insensitive. User, Password, Db, and
Table_name values are case-sensitive.
Column_name values are case-insensitive in MySQL Version
3.22.12 or later.
Privilege fields indicate the privileges granted by a table entry, that is, what operations can be performed. The server combines the information in the various grant tables to form a complete description of a user's privileges. The rules used to do this are described in section 4.2.10 Access Control, Stage 2: Request Verification.
Scope fields are strings, declared as shown here; the default value for each is the empty string:
| Field name | Type | Notes |
| Host | CHAR(60) | |
| User | CHAR(16) | |
| Password | CHAR(16) | |
| Db | CHAR(64) | (CHAR(60) for the tables_priv and columns_priv tables) |
| Table_name | CHAR(60) | |
| Column_name | CHAR(60) | |
In the user, db and host tables,
all privilege fields are declared as ENUM('N','Y'); each can have a
value of 'N' or 'Y', and the default value is 'N'.
In the tables_priv and columns_priv tables, the privilege
fields are declared as SET fields:
| Table name | Field name | Possible set elements |
| tables_priv | Table_priv | 'Select', 'Insert', 'Update', 'Delete', 'Create', 'Drop', 'Grant', 'References', 'Index', 'Alter' |
| tables_priv | Column_priv | 'Select', 'Insert', 'Update', 'References' |
| columns_priv | Column_priv | 'Select', 'Insert', 'Update', 'References' |
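To see how these fields look on your own server, you can query the grant tables directly, assuming you have read access to the mysql database (which normally only the MySQL root user should have):
mysql> SELECT Host, Db, User, Table_name, Table_priv
    ->        FROM mysql.tables_priv;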
Briefly, the server uses the grant tables like this:
The user table scope fields determine whether to allow or reject
incoming connections. For allowed connections, any privileges granted in
the user table indicate the user's global (superuser) privileges.
These privileges apply to all databases on the server.
The db and host tables are used together:
The db table scope fields determine which users can access which
databases from which hosts. The privilege fields determine which operations
are allowed.
The host table is used as an extension of the db table when you
want a given db table entry to apply to several hosts. For example,
if you want a user to be able to use a database from several hosts in
your network, leave the Host value empty in the user's db table
entry, then populate the host table with an entry for each of those
hosts. This mechanism is described in more detail in section 4.2.10 Access Control, Stage 2: Request Verification.
The tables_priv and columns_priv tables are similar to
the db table, but are more fine-grained: they apply at the
table and column levels rather than at the database level.
Note that administrative privileges (RELOAD, SHUTDOWN,
etc.) are specified only in the user table. This is because
administrative operations are operations on the server itself and are not
database-specific, so there is no reason to list such privileges in the
other grant tables. In fact, only the user table need
be consulted to determine whether you can perform an administrative
operation.
The FILE privilege is specified only in the user table, too.
It is not an administrative privilege as such, but your ability to read or
write files on the server host is independent of the database you are
accessing.
The mysqld server reads the contents of the grant tables once, when it
starts up. Changes to the grant tables take effect as indicated in
section 4.3.3 When Privilege Changes Take Effect.
When you modify the contents of the grant tables, it is a good idea to make
sure that your changes set up privileges the way you want. For help in
diagnosing problems, see section 4.2.11 Causes of Access denied Errors. For advice on security issues,
see section 4.2.2 How to Make MySQL Secure Against Crackers.
A useful
diagnostic tool is the mysqlaccess script, which Yves Carlier has
provided for the MySQL distribution. Invoke mysqlaccess with
the --help option to find out how it works.
Note that mysqlaccess checks access using only the user,
db and host tables. It does not check table- or column-level
privileges.
Information about user privileges is stored in the user, db,
host, tables_priv, and columns_priv tables in the
mysql database (that is, in the database named mysql). The
MySQL server reads the contents of these tables when it starts up
and under the circumstances indicated in section 4.3.3 When Privilege Changes Take Effect.
The names used in this manual to refer to the privileges provided by MySQL version 4.0.2 are shown here, along with the table column name associated with each privilege in the grant tables and the context in which the privilege applies:
| Privilege | Column | Context |
| ALTER | Alter_priv | tables |
| DELETE | Delete_priv | tables |
| INDEX | Index_priv | tables |
| INSERT | Insert_priv | tables |
| SELECT | Select_priv | tables |
| UPDATE | Update_priv | tables |
| CREATE | Create_priv | databases, tables, or indexes |
| DROP | Drop_priv | databases or tables |
| GRANT | Grant_priv | databases or tables |
| REFERENCES | References_priv | databases or tables |
| CREATE TEMPORARY TABLES | Create_tmp_table_priv | server administration |
| EXECUTE | Execute_priv | server administration |
| FILE | File_priv | file access on server |
| LOCK TABLES | Lock_tables_priv | server administration |
| PROCESS | Process_priv | server administration |
| RELOAD | Reload_priv | server administration |
| REPLICATION CLIENT | Repl_client_priv | server administration |
| REPLICATION SLAVE | Repl_slave_priv | server administration |
| SHOW DATABASES | Show_db_priv | server administration |
| SHUTDOWN | Shutdown_priv | server administration |
| SUPER | Super_priv | server administration |
The SELECT, INSERT, UPDATE, and DELETE
privileges allow you to perform operations on rows in existing tables in
a database.
SELECT statements require the SELECT privilege only if they
actually retrieve rows from a table. You can execute certain SELECT
statements even without permission to access any of the databases on the
server. For example, you could use the mysql client as a simple
calculator:
mysql> SELECT 1+1;
mysql> SELECT PI()*2;
The INDEX privilege allows you to create or drop (remove) indexes.
The ALTER privilege allows you to use ALTER TABLE.
The CREATE and DROP privileges allow you to create new
databases and tables, or to drop (remove) existing databases and tables.
Note that if you grant the DROP privilege for the mysql
database to a user, that user can drop the database in which the
MySQL access privileges are stored!
The GRANT privilege allows you to give to other users those
privileges you yourself possess.
The FILE privilege gives you permission to read and write files on
the server using the LOAD DATA INFILE and SELECT ... INTO
OUTFILE statements. Any user to whom this privilege is granted can read
any world-readable file accessible by the MySQL server and create a new
world-readable file in any directory where the MySQL server can write.
The user can also read any file in the current database directory.
The user cannot, however, change any existing file.
The remaining privileges are used for administrative operations, which are
performed using the mysqladmin program. The table here shows which
mysqladmin commands each administrative privilege allows you to
execute:
| Privilege | Commands permitted to privilege holders |
| RELOAD | reload, refresh, flush-privileges, flush-hosts, flush-logs, and flush-tables |
| SHUTDOWN | shutdown |
| PROCESS | processlist |
| SUPER | kill |
The reload command tells the server to re-read the grant tables. The
refresh command flushes all tables and opens and closes the log
files. flush-privileges is a synonym for reload. The other
flush-* commands perform functions similar to refresh but are
more limited in scope, and may be preferable in some instances. For example,
if you want to flush just the log files, flush-logs is a better choice
than refresh.
The shutdown command shuts down the server.
The processlist command displays information about the threads
executing within the server. The kill command kills server
threads. You can always display or kill your own threads, but you need
the PROCESS privilege to display and SUPER privilege to
kill threads initiated by other users. See section 4.5.6 KILL Syntax.
It is a good idea in general to grant privileges only to those users who need them, but you should exercise particular caution in granting certain privileges:
The GRANT privilege allows users to give away their privileges to
other users. Two users with different privileges and with the GRANT
privilege are able to combine privileges.
The ALTER privilege may be used to subvert the privilege system
by renaming tables.
The FILE privilege can be abused to read any world-readable file
on the server or any file in the current database directory on the
server into a database table, the contents of which can then be accessed
using SELECT.
The SHUTDOWN privilege can be abused to deny service to other
users entirely, by terminating the server.
The PROCESS privilege can be used to view the plain text of
currently executing queries, including queries that set or change passwords.
Privileges granted on the mysql database can be used to change passwords
and other access privilege information. (Passwords are stored
encrypted, so a malicious user cannot simply read them to know the plain
text password.) If they can access the mysql.user password
column, they can use it to log into the MySQL server
for the given user. (With sufficient privileges, the same user can
replace a password with a different one.)
There are some things that you cannot do with the MySQL privilege system:
MySQL client programs generally require that you specify connection
parameters when you want to access a MySQL server: the host you want
to connect to, your user name, and your password. For example, the
mysql client can be started like this (optional arguments are enclosed
between `[' and `]'):
shell> mysql [-h host_name] [-u user_name] [-pyour_pass]
Alternate forms of the -h, -u, and -p options are
--host=host_name, --user=user_name, and
--password=your_pass. Note that there is no space between
-p or --password= and the password following it.
Note: Specifying a password on the command-line is not secure!
Any user on your system may then find out your password by typing a command
like: ps auxww. See section 4.1.2 `my.cnf' Option Files.
mysql uses default values for connection parameters that are missing
from the command-line:
The default hostname is localhost.
The default user name is your Unix login name.
No password is supplied if -p is missing.
Thus, for a Unix user joe, the following commands are equivalent:
shell> mysql -h localhost -u joe
shell> mysql -h localhost
shell> mysql -u joe
shell> mysql
Other MySQL clients behave similarly.
On Unix systems, you can specify different default values to be used when you make a connection, so that you need not enter them on the command-line each time you invoke a client program. This can be done in a couple of ways:
Specify the connection parameters in the [client] section of the
`.my.cnf' configuration file in your home directory. The relevant
section of the file might look like this:
[client]
host=host_name
user=user_name
password=your_pass
See section 4.1.2 `my.cnf' Option Files.
Specify connection parameters using environment variables. The host can be
specified for mysql using MYSQL_HOST. The
MySQL user name can be specified using USER (this is for
Windows only). The password can be specified using MYSQL_PWD
(but this is insecure; see the next section). See section F Environment Variables.
When you attempt to connect to a MySQL server, the server accepts or rejects the connection based on your identity and whether you can verify your identity by supplying the correct password. If not, the server denies access to you completely. Otherwise, the server accepts the connection, then enters Stage 2 and waits for requests.
Your identity is based on two pieces of information: the host from which you connect, and your MySQL user name.
Identity checking is performed using the three user table scope fields
(Host, User, and Password). The server accepts the
connection only if a user table entry matches your hostname and user
name, and you supply the correct password.
Values in the user table scope fields may be specified as follows:
A Host value may be a hostname or an IP number, or 'localhost'
to indicate the local host.
You can use the wildcard characters `%' and `_' in the Host
field.
A Host value of '%' matches any hostname.
A blank Host value means that the privilege should be anded
with the entry in the host table that matches the given host name.
You can find more information about this in the next chapter.
For Host values specified as
IP numbers, you can specify a netmask indicating how many address bits to
use for the network number. For example:
mysql> GRANT ALL PRIVILEGES ON db.*
-> TO david@'192.58.197.0/255.255.255.0';
This will allow everyone to connect from an IP where the following is true:
user_ip & netmask = host_ip
In the above example all IPs in the interval 192.58.197.0 - 192.58.197.255 can connect to the MySQL server.
Wildcard characters are not allowed in the User field, but you can
specify a blank value, which matches any name. If the user table
entry that matches an incoming connection has a blank user name, the user is
considered to be the anonymous user (the user with no name), rather than the
name that the client actually specified. This means that a blank user name
is used for all further access checking for the duration of the connection
(that is, during Stage 2).
The Password field can be blank. This does not mean that any password
matches; it means the user must connect without specifying a password.
Non-blank Password values represent encrypted passwords.
MySQL does not store passwords in plaintext form for anyone to
see. Rather, the password supplied by a user who is attempting to
connect is encrypted (using the PASSWORD() function). The
encrypted password is then used when the client/server is checking if
the password is correct. (This is done without the encrypted password
ever traveling over the connection.) Note that from MySQL's
point of view the encrypted password is the REAL password, so you should
not give anyone access to it! In particular, don't give normal users
read access to the tables in the mysql database!
From version 4.1, MySQL employs a different password and login mechanism
that is secure even if TCP/IP packets are sniffed and/or the mysql database
is captured.
The examples here show how various combinations of Host and
User values in user table entries apply to incoming
connections:
| Host value | User value | Connections matched by entry |
| 'thomas.loc.gov' | 'fred' | fred, connecting from thomas.loc.gov |
| 'thomas.loc.gov' | '' | Any user, connecting from thomas.loc.gov |
| '%' | 'fred' | fred, connecting from any host |
| '%' | '' | Any user, connecting from any host |
| '%.loc.gov' | 'fred' | fred, connecting from any host in the loc.gov domain |
| 'x.y.%' | 'fred' | fred, connecting from x.y.net, x.y.com, x.y.edu, etc. (this is probably not useful) |
| '144.155.166.177' | 'fred' | fred, connecting from the host with IP address 144.155.166.177 |
| '144.155.166.%' | 'fred' | fred, connecting from any host in the 144.155.166 class C subnet |
| '144.155.166.0/255.255.255.0' | 'fred' | Same as previous example |
Because you can use IP wildcard values in the Host field (for example,
'144.155.166.%' to match every host on a subnet), there is the
possibility that someone might try to exploit this capability by naming a
host 144.155.166.somewhere.com. To foil such attempts, MySQL
disallows matching on hostnames that start with digits and a dot. Thus, if
you have a host named something like 1.2.foo.com, its name will never
match the Host column of the grant tables. Only an IP number can
match an IP wildcard value.
An incoming connection may be matched by more than one entry in the
user table. For example, a connection from thomas.loc.gov by
fred would be matched by several of the entries just shown above. How
does the server choose which entry to use if more than one matches? The
server resolves this question by sorting the user table after reading
it at startup time, then looking through the entries in sorted order when a
user attempts to connect. The first matching entry is the one that is used.
The user table sorting works as follows. Suppose the user table
looks like this:
+-----------+----------+-
| Host      | User     | ...
+-----------+----------+-
| %         | root     | ...
| %         | jeffrey  | ...
| localhost | root     | ...
| localhost |          | ...
+-----------+----------+-
When the server reads in the table, it orders the entries with the
most-specific Host values first ('%' in the Host column
means ``any host'' and is least specific). Entries with the same Host
value are ordered with the most-specific User values first (a blank
User value means ``any user'' and is least specific). The resulting
sorted user table looks like this:
+-----------+----------+-
| Host      | User     | ...
+-----------+----------+-
| localhost | root     | ...
| localhost |          | ...
| %         | jeffrey  | ...
| %         | root     | ...
+-----------+----------+-
When a connection is attempted, the server looks through the sorted entries
and uses the first match found. For a connection from localhost by
jeffrey, the entries with 'localhost' in the Host column
match first. Of those, the entry with the blank user name matches both the
connecting hostname and user name. (The '%'/'jeffrey' entry would
have matched, too, but it is not the first match in the table.)
Here is another example. Suppose the user table looks like this:
+----------------+----------+-
| Host           | User     | ...
+----------------+----------+-
| %              | jeffrey  | ...
| thomas.loc.gov |          | ...
+----------------+----------+-
The sorted table looks like this:
+----------------+----------+-
| Host           | User     | ...
+----------------+----------+-
| thomas.loc.gov |          | ...
| %              | jeffrey  | ...
+----------------+----------+-
A connection from thomas.loc.gov by jeffrey is matched by the
first entry, whereas a connection from whitehouse.gov by
jeffrey is matched by the second.
A common misconception is to think that for a given user name, all entries
that explicitly name that user will be used first when the server attempts to
find a match for the connection. This is simply not true. The previous
example illustrates this, where a connection from thomas.loc.gov by
jeffrey is first matched not by the entry containing 'jeffrey'
as the User field value, but by the entry with no user name!
If you have problems connecting to the server, print out the user
table and sort it by hand to see where the first match is being made.
If the connection was successful but your privileges are not what you
expected, you may use the CURRENT_USER() function (new in version
4.0.6) to see what user/host combination your connection actually
matched. See section 6.3.6.2 Miscellaneous Functions.
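For example, a connection that was matched by an anonymous-user entry might show something like this (the output shown is only a hypothetical illustration):
mysql> SELECT CURRENT_USER();
+----------------+
| CURRENT_USER() |
+----------------+
| @localhost     |
+----------------+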
Once you establish a connection, the server enters Stage 2. For each request
that comes in on the connection, the server checks whether you have
sufficient privileges to perform it, based on the type of operation you wish
to perform. This is where the privilege fields in the grant tables come into
play. These privileges can come from any of the user, db,
host, tables_priv, or columns_priv tables. The grant
tables are manipulated with GRANT and REVOKE commands.
See section 4.3.1 GRANT and REVOKE Syntax. (You may find it helpful to refer to
section 4.2.6 How the Privilege System Works, which lists the fields present in each of the grant
tables.)
The user table grants privileges that are assigned to you on a global
basis and that apply no matter what the current database is. For example, if
the user table grants you the DELETE privilege, you can
delete rows from any database on the server host! In other words,
user table privileges are superuser privileges. It is wise to grant
privileges in the user table only to superusers such as server or
database administrators. For other users, you should leave the privileges
in the user table set to 'N' and grant privileges on a
database-specific basis only, using the db and host tables.
The db and host tables grant database-specific privileges.
Values in the scope fields may be specified as follows:
The wildcard characters '%' and '_' can be used in the Host
and Db fields of either table. If you wish to use for instance a
`_' character as part of a database name, specify it as `\_' in
the GRANT command.
A '%' Host value in the db table means ``any host.'' A
blank Host value in the db table means ``consult the
host table for further information.''
A '%' or blank Host value in the host table means ``any
host.''
A '%' or blank Db value in either table means ``any database.''
A blank User value in either table matches the anonymous user.
The db and host tables are read in and sorted when the server
starts up (at the same time that it reads the user table). The
db table is sorted on the Host, Db, and User scope
fields, and the host table is sorted on the Host and Db
scope fields. As with the user table, sorting puts the most-specific
values first and least-specific values last, and when the server looks for
matching entries, it uses the first match that it finds.
The tables_priv and columns_priv tables grant table- and
column-specific privileges. Values in the scope fields may be specified as
follows:
The wildcard characters '%' and '_' can be used in the
Host field of either table.
A '%' or blank Host value in either table means ``any host.''
The Db, Table_name, and Column_name fields cannot contain
wildcards or be blank in either table.
The tables_priv and columns_priv tables are sorted on
the Host, Db, and User fields. This is similar to
db table sorting, although the sorting is simpler because
only the Host field may contain wildcards.
The request verification process is described here. (If you are familiar with the access-checking source code, you will notice that the description here differs slightly from the algorithm used in the code. The description is equivalent to what the code actually does; it differs only to make the explanation simpler.)
For administrative requests (SHUTDOWN, RELOAD, etc.), the
server checks only the user table entry, because that is the only table
that specifies administrative privileges. Access is granted if the entry
allows the requested operation and denied otherwise. For example, if you
want to execute mysqladmin shutdown but your user table entry
doesn't grant the SHUTDOWN privilege to you, access is denied
without even checking the db or host tables. (They
contain no Shutdown_priv column, so there is no need to do so.)
For database-related requests (INSERT, UPDATE, etc.), the
server first checks the user's global (superuser) privileges by looking in
the user table entry. If the entry allows the requested operation,
access is granted. If the global privileges in the user table are
insufficient, the server determines the user's database-specific privileges
by checking the db and host tables:
The server looks in the db table for a match on the Host,
Db, and User fields. The Host and User fields are
matched to the connecting user's hostname and MySQL user name. The
Db field is matched to the database the user wants to access. If
there is no entry for the Host and User, access is denied.
If there is a matching db table entry and its Host field is
not blank, that entry defines the user's database-specific privileges.
If the matching db table entry's Host field is blank, it
signifies that the host table enumerates which hosts should be allowed
access to the database. In this case, a further lookup is done in the
host table to find a match on the Host and Db fields.
If no host table entry matches, access is denied. If there is a
match, the user's database-specific privileges are computed as the
intersection (not the union!) of the privileges in the db and
host table entries, that is, the privileges that are 'Y' in both
entries. (This way you can grant general privileges in the db table
entry and then selectively restrict them on a host-by-host basis using the
host table entries.)
After determining the database-specific privileges granted by the db
and host table entries, the server adds them to the global privileges
granted by the user table. If the result allows the requested
operation, access is granted. Otherwise, the server checks the user's
table and column privileges in the tables_priv and columns_priv
tables and adds those to the user's privileges. Access is allowed or denied
based on the result.
Expressed in boolean terms, the preceding description of how a user's privileges are calculated may be summarised like this:
global privileges OR (database privileges AND host privileges) OR table privileges OR column privileges
It may not be apparent why, if the global user entry privileges are
initially found to be insufficient for the requested operation, the server
adds those privileges to the database-, table-, and column-specific privileges
later. The reason is that a request might require more than one type of
privilege. For example, if you execute an INSERT ... SELECT
statement, you need both INSERT and SELECT privileges.
Your privileges might be such that the user table entry grants one
privilege and the db table entry grants the other. In this case, you
have the necessary privileges to perform the request, but the server cannot
tell that from either table by itself; the privileges granted by the entries
in both tables must be combined.
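For example (the table names here are hypothetical), the following single
statement requires the SELECT privilege on live_table and the
INSERT privilege on archive_table; one privilege may come from the
user table entry and the other from the db table entry:
mysql> INSERT INTO archive_table SELECT * FROM live_table;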
The host table can be used to maintain a list of secure servers.
At TcX, the host table contains a list of all machines on the local
network. These are granted all privileges.
You can also use the host table to indicate hosts that are not
secure. Suppose you have a machine public.your.domain that is located
in a public area that you do not consider secure. You can allow access to
all hosts on your network except that machine by using host table
entries
like this:
+--------------------+----+-
| Host               | Db | ...
+--------------------+----+-
| public.your.domain | %  | ... (all privileges set to 'N')
| %.your.domain      | %  | ... (all privileges set to 'Y')
+--------------------+----+-
Naturally, you should always test your entries in the grant tables (for
example, using mysqlaccess) to make sure your access privileges are
actually set up the way you think they are.
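For instance, SHOW GRANTS (available in MySQL 3.23.4 and later) displays the
GRANT statements equivalent to an account's current privileges; the account
name below is only an example:
mysql> SHOW GRANTS FOR 'jeffrey'@'%';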
Access denied Errors
If you encounter Access denied errors when you try to connect to the
MySQL server, the following list indicates some courses of
action you can take to correct the problem:
After installing MySQL, did you run the mysql_install_db
script to set up the initial grant table contents? If not, do so.
See section 4.3.4 Setting Up the Initial MySQL Privileges. Test the initial privileges by executing
this command:
shell> mysql -u root test
The server should let you connect without error. You should also make sure
you have a file `user.MYD' in the MySQL database directory.
Ordinarily, this is `PATH/var/mysql/user.MYD', where PATH is the
pathname to the MySQL installation root.
After a fresh installation, you should connect to the server and set up
your users and their access permissions:
shell> mysql -u root mysql
The server should let you connect because the MySQL root user
has no password initially. That is also a security risk, so setting the
root password is something you should do while you're setting up
your other MySQL users.
If you try to connect as root and get this error:
Access denied for user: '@unknown' to database mysql
This means that you don't have an entry in the
user table with a
User column value of 'root' and that mysqld cannot
resolve the hostname for your client. In this case, you must restart the
server with the --skip-grant-tables option and edit your
`/etc/hosts' or `\windows\hosts' file to add an entry for your
host.
If you get an error like the following when you try to connect, it means
that you are using a wrong password:
shell> mysqladmin -u root -pxxxx ver
Access denied for user: 'root@localhost' (Using password: YES)
See section 4.3.7 Setting Up Passwords. If you have forgotten the root
password, you can restart mysqld with
--skip-grant-tables to change the password.
See section A.4.2 How to Reset a Forgotten Root Password.
If you get the above error even though you haven't specified a password,
this means that you have a wrong password listed in some option file, such
as `my.ini'. See section 4.1.2 `my.cnf' Option Files. You can avoid using option files with the --no-defaults option, as follows:
shell> mysqladmin --no-defaults -u root ver
If you upgraded from a version older than 3.22.11, did you run the
mysql_fix_privilege_tables script? If not, do so. The structure of
the grant tables changed with MySQL Version 3.22.11 when the
GRANT statement became functional.
Remember that you must use the
PASSWORD() function if you set the password with the
INSERT, UPDATE, or SET PASSWORD statements. The
PASSWORD() function is unnecessary if you specify the password using
the GRANT ... IDENTIFIED BY statement or the mysqladmin
password command.
See section 4.3.7 Setting Up Passwords.
localhost is a synonym for your local hostname, and is also the
default host to which clients try to connect if you specify no host
explicitly. However, connections to localhost do not work if you are
using a MySQL version prior to 3.23.27 that uses MIT-pthreads
(localhost connections are made using Unix sockets, which were not
supported by MIT-pthreads at that time). To avoid this problem on such
systems, you should use the --host option to name
the server host explicitly. This will make a TCP/IP connection to the
mysqld server. In this case, you must have your real hostname in
user table entries on the server host. (This is true even if you are
running a client program on the same host as the server.)
If you get an Access denied error when trying to connect to the
database with mysql -u user_name db_name, you may have a problem
with the user table. Check this by executing mysql -u root
mysql and issuing this SQL statement:
mysql> SELECT * FROM user;
The result should include an entry with the Host and User
columns matching your computer's hostname and your MySQL user name.
The Access denied error message will tell you who you are trying
to log in as, the host from which you are trying to connect, and whether
or not you were using a password. Normally, you should have one entry in
the user table that exactly matches the hostname and user name
that were given in the error message. For example, if you get an error
message that contains Using password: NO, this means that you
tried to log in without a password.
If you get the following error when you try to connect from a host other
than the one on which the MySQL server is running, there is no entry in the
user table that matches that host:
Host ... is not allowed to connect to this MySQL server
You can fix this by using the command-line tool mysql (on the
server host!) to add a row to the user, db, or host
table for the user/hostname combination from which you are trying to
connect and then execute mysqladmin flush-privileges. If you are
not running MySQL Version 3.22 and you don't know the IP number or
hostname of the machine from which you are connecting, you should put an
entry with '%' as the Host column value in the user
table and restart mysqld with the --log option on the
server machine. After trying to connect from the client machine, the
information in the MySQL log will indicate how you really did
connect. (Then replace the '%' in the user table entry
with the actual hostname that shows up in the log. Otherwise, you'll
have a system that is insecure.)
Another reason for this error on Linux is that you are using a binary
MySQL version that is compiled with a different glibc version
than the one you are using. In this case you should either upgrade your
OS/glibc or download the source MySQL version and compile this
yourself. A source RPM is normally trivial to compile and install, so
this isn't a big problem.
If you get the following error, it means that MySQL got some error when
trying to resolve the IP address of the client host to a hostname:
shell> mysqladmin -u root -pxxxx -h some-hostname ver
Access denied for user: 'root@' (Using password: YES)
In this case you can execute mysqladmin
flush-hosts to reset the internal DNS cache. See section 5.5.5 How MySQL uses DNS.
Some permanent solutions are:
Start mysqld with --skip-name-resolve.
Start mysqld with --skip-host-cache.
Connect with localhost if you are running the server and the client
on the same machine.
Put the client machine names in `/etc/hosts'.
If mysql -u root test works but mysql -h your_hostname -u root
test results in Access denied, then you may not have the correct name
for your host in the user table. A common problem here is that the
Host value in the user table entry specifies an unqualified hostname,
but your system's name resolution routines return a fully qualified domain
name (or vice-versa). For example, if you have an entry with host
'tcx' in the user table, but your DNS tells MySQL that
your hostname is 'tcx.subnet.se', the entry will not work. Try adding
an entry to the user table that contains the IP number of your host as
the Host column value. (Alternatively, you could add an entry to the
user table with a Host value that contains a wildcard--for
example, 'tcx.%'. However, use of hostnames ending with `%' is
insecure and is not recommended!)
If mysql -u user_name test works but mysql -u user_name
other_db_name doesn't work, you don't have an entry for other_db_name
listed in the db table.
If mysql -u user_name db_name works when executed on the server
machine, but mysql -h host_name -u user_name db_name doesn't work when
executed on another client machine, you don't have the client machine listed
in the user table or the db table.
If you can't figure out why you get Access denied, remove from the
user table all entries that have Host values containing
wildcards (entries that contain `%' or `_'). A very common error
is to insert a new entry with Host='%' and
User='some user', thinking that this will allow you to specify
localhost to connect from the same machine. The reason that this
doesn't work is that the default privileges include an entry with
Host='localhost' and User=''. Because that entry
has a Host value 'localhost' that is more specific than
'%', it is used in preference to the new entry when connecting from
localhost! The correct procedure is to insert a second entry with
Host='localhost' and User='some_user', or to
remove the entry with Host='localhost' and
User=''.
If you get the following error, you may have a problem with the db or
host table:
Access to database denied
If the entry selected from the
db table has an empty value in the
Host column, make sure there are one or more corresponding entries in
the host table specifying which hosts the db table entry
applies to.
If you get the error when using the SQL commands SELECT ...
INTO OUTFILE or LOAD DATA INFILE, your entry in the user table
probably doesn't have the FILE privilege enabled.
If you get Access denied when you run a client without any options, make
sure you haven't specified an old password in any of your option files!
See section 4.1.2 `my.cnf' Option Files.
If you make changes to the grant tables directly (using an INSERT or
UPDATE statement) and your changes seem to be ignored, remember
that you must issue a FLUSH PRIVILEGES statement or execute a
mysqladmin flush-privileges command to cause the server to re-read
the privilege tables. Otherwise, your changes have no effect until the
next time the server is restarted. Remember that after you set the
root password with an UPDATE command, you won't need to
specify it until after you flush the privileges, because the server
won't know you've changed the password yet!
If you have access problems with a Perl, PHP, Python, or ODBC program, try
to connect to the server with mysql -u user_name db_name or mysql
-u user_name -pyour_pass db_name. If you are able to connect using the
mysql client, there is a problem with your program and not with the
access privileges. (Note that there is no space between -p and the
password; you can also use the --password=your_pass syntax to specify
the password. If you use the -p option alone, MySQL will
prompt you for the password.)
For testing, start the mysqld daemon with the
--skip-grant-tables option. Then you can change the MySQL
grant tables and use the mysqlaccess script to check whether
your modifications have the desired effect. When you are satisfied with your
changes, execute mysqladmin flush-privileges to tell the mysqld
server to start using the new grant tables. Note: reloading the
grant tables overrides the --skip-grant-tables option. This allows
you to tell the server to begin using the grant tables again without bringing
it down and restarting it.
If everything else fails, start the mysqld daemon with a debugging
option (for example, --debug=d,general,query). This will print host and
user information about attempted connections, as well as information about
each command issued. See section E.1.2 Creating Trace Files.
If you still have problems, dump the grant tables with the
mysqldump mysql command. As always, post your problem using
the mysqlbug script. See section 1.7.1.3 How to Report Bugs or Problems. In some cases you may need
to restart mysqld with --skip-grant-tables to run
mysqldump.
GRANT and REVOKE Syntax
GRANT priv_type [(column_list)] [, priv_type [(column_list)] ...]
ON {tbl_name | * | *.* | db_name.*}
TO user_name [IDENTIFIED BY [PASSWORD] 'password']
[, user_name [IDENTIFIED BY 'password'] ...]
[REQUIRE
NONE |
[{SSL| X509}]
[CIPHER cipher [AND]]
[ISSUER issuer [AND]]
[SUBJECT subject]]
[WITH [GRANT OPTION | MAX_QUERIES_PER_HOUR # |
MAX_UPDATES_PER_HOUR # |
MAX_CONNECTIONS_PER_HOUR #]]
REVOKE priv_type [(column_list)] [, priv_type [(column_list)] ...]
ON {tbl_name | * | *.* | db_name.*}
FROM user_name [, user_name ...]
GRANT is implemented in MySQL Version 3.22.11 or later. For
earlier MySQL versions, the GRANT statement does nothing.
The GRANT and REVOKE commands allow system administrators
to create users and grant and revoke rights to MySQL users at
four privilege levels:
Global level: global privileges apply to all databases on a given server.
These privileges are stored in the mysql.user table.
REVOKE ALL ON *.* will revoke only global privileges.
Database level: database privileges apply to all tables in a given database.
These privileges are stored in the mysql.db and mysql.host tables.
REVOKE ALL ON db.* will revoke only database privileges.
Table level: table privileges apply to all columns in a given table.
These privileges are stored in the mysql.tables_priv table.
REVOKE ALL ON db.table will revoke only table privileges.
Column level: column privileges apply to single columns in a given table.
These privileges are stored in the mysql.columns_priv table.
When using REVOKE you must specify the same columns that were granted.
For the GRANT and REVOKE statements, priv_type may be
specified as any of the following:
ALL [PRIVILEGES]        | Sets all simple privileges except WITH GRANT OPTION
ALTER                   | Allows usage of ALTER TABLE
CREATE                  | Allows usage of CREATE TABLE
CREATE TEMPORARY TABLES | Allows usage of CREATE TEMPORARY TABLE
DELETE                  | Allows usage of DELETE
DROP                    | Allows usage of DROP TABLE
EXECUTE                 | Allows the user to run stored procedures (MySQL 5.0)
FILE                    | Allows usage of SELECT ... INTO OUTFILE and LOAD DATA INFILE
INDEX                   | Allows usage of CREATE INDEX and DROP INDEX
INSERT                  | Allows usage of INSERT
LOCK TABLES             | Allows usage of LOCK TABLES on tables for which one has the SELECT privilege
PROCESS                 | Allows usage of SHOW FULL PROCESSLIST
REFERENCES              | For the future
RELOAD                  | Allows usage of FLUSH
REPLICATION CLIENT      | Gives the right to the user to ask where the slaves/masters are
REPLICATION SLAVE       | Needed for the replication slaves (to read binary logs from the master)
SELECT                  | Allows usage of SELECT
SHOW DATABASES          | SHOW DATABASES shows all databases
SHUTDOWN                | Allows usage of mysqladmin shutdown
SUPER                   | Allows one connection (once) even if max_connections is reached, and execution of the commands CHANGE MASTER, KILL thread, mysqladmin debug, PURGE [MASTER] LOGS, and SET GLOBAL
UPDATE                  | Allows usage of UPDATE
USAGE                   | Synonym for ``no privileges''
GRANT OPTION            | Synonym for WITH GRANT OPTION
USAGE can be used when you want to create a user that has no privileges.
The privileges CREATE TEMPORARY TABLES, EXECUTE,
LOCK TABLES, REPLICATION ..., SHOW DATABASES and
SUPER are new in version 4.0.2. To use these new privileges
after upgrading to 4.0.2, you have to run the
mysql_fix_privilege_tables script.
In older MySQL versions, the PROCESS privilege gives the same
rights as the new SUPER privilege.
To revoke the GRANT privilege from a user, use a priv_type
value of GRANT OPTION:
mysql> REVOKE GRANT OPTION ON ... FROM ...;
The only priv_type values you can specify for a table are SELECT,
INSERT, UPDATE, DELETE, CREATE, DROP,
GRANT OPTION, INDEX, and ALTER.
The only priv_type values you can specify for a column (that is, when
you use a column_list clause) are SELECT, INSERT, and
UPDATE.
MySQL allows you to create database-level privileges even if the database doesn't exist, to make it easy to prepare for later database usage. However, MySQL currently does not allow you to create table-level grants if the table doesn't exist. MySQL will not automatically revoke any privileges even if you drop a table or drop a database.
You can set global privileges by using ON *.* syntax. You can set
database privileges by using ON db_name.* syntax. If you specify
ON * and you have a current database, you will set the privileges for
that database. (Warning: if you specify ON * and you
don't have a current database, you will affect the global privileges!)
Please note: the `_' and `%' wildcards are allowed when
specifying database names in GRANT commands. This means that if you
wish to use for instance a `_' character as part of a database name,
you should specify it as `\_' in the GRANT command, to prevent
the user from being able to access additional databases matching the
wildcard pattern, e.g., GRANT ... ON `foo\_bar`.* TO ....
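A minimal illustration (the database and account names are hypothetical): the
first statement grants access to the single database foo_bar, whereas in the
second the unescaped `_' matches any single character, so a database such as
foo1bar would match as well:
mysql> GRANT SELECT ON `foo\_bar`.* TO someuser@localhost;
mysql> GRANT SELECT ON `foo_bar`.* TO someuser@localhost;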
In order to accommodate granting rights to users from arbitrary hosts,
MySQL supports specifying the user_name value in the form
user@host. If you want to specify a user string
containing special characters (such as `-'), or a host string
containing special characters or wildcard characters (such as `%'), you
can quote the user or host name (for example, 'test-user'@'test-hostname').
You can specify wildcards in the hostname. For example,
user@'%.loc.gov' applies to user for any host in the
loc.gov domain, and user@'144.155.166.%' applies to user
for any host in the 144.155.166 class C subnet.
The simple form user is a synonym for user@"%".
MySQL doesn't support wildcards in user names. Anonymous users are
defined by inserting entries with User='' into the
mysql.user table or creating a user with an empty name with the
GRANT command.
Note: if you allow anonymous users to connect to the MySQL
server, you should also grant privileges to all local users as
user@localhost because otherwise the anonymous user entry for
the local host in the mysql.user table will be used when the user
tries to log into the MySQL server from the local machine!
You can verify if this applies to you by executing this query:
mysql> SELECT Host,User FROM mysql.user WHERE User='';
For the moment, GRANT only supports host, table, database, and
column names up to 60 characters long. A user name can be up to 16
characters.
The privileges for a table or column are formed from the
logical OR of the privileges at each of the four privilege
levels. For example, if the mysql.user table specifies that a
user has a global SELECT privilege, this can't be denied by an
entry at the database, table, or column level.
The privileges for a column can be calculated as follows:
global privileges OR (database privileges AND host privileges) OR table privileges OR column privileges
In most cases, you grant rights to a user at only one of the privilege levels, so life isn't normally as complicated as above. The details of the privilege-checking procedure are presented in section 4.2 General Security Issues and the MySQL Access Privilege System.
If you grant privileges for a user/hostname combination that does not exist
in the mysql.user table, an entry is added and remains there until
deleted with a DELETE command. In other words, GRANT may
create user table entries, but REVOKE will not remove them;
you must do that explicitly using DELETE.
In MySQL Version 3.22.12 or later,
if a new user is created or if you have global grant privileges, the user's
password will be set to the password specified by the IDENTIFIED BY
clause, if one is given. If the user already had a password, it is replaced
by the new one.
If you don't want to send the password in clear text you can use the
PASSWORD option followed by a scrambled password obtained from the SQL
function PASSWORD() or the C API function
make_scrambled_password(char *to, const char *password).
Warning: if you create a new user but do not specify an
IDENTIFIED BY clause, the user has no password. This is insecure.
Passwords can also be set with the SET PASSWORD command.
See section 5.5.6 SET Syntax.
If you grant privileges for a database, an entry in the mysql.db
table is created if needed. When all privileges for the database have been
removed with REVOKE, this entry is deleted.
If a user doesn't have any privileges on a table, the table is not displayed
when the user requests a list of tables (for example, with a SHOW TABLES
statement). The same is true for SHOW DATABASES.
The WITH GRANT OPTION clause gives the user the ability to give
to other users any privileges the user has at the specified privilege level.
You should be careful to whom you give the GRANT privilege, as two
users with different privileges may be able to join privileges!
MAX_QUERIES_PER_HOUR #, MAX_UPDATES_PER_HOUR # and
MAX_CONNECTIONS_PER_HOUR # are new in MySQL version 4.0.2.
These options limit the number of queries/updates and logins the user can
do during one hour. If # is 0 (default), then this means that there
are no limitations for that user. See section 4.3.6 Limiting user resources.
Note: to specify any of these options for an existing user without granting
additional privileges, use GRANT USAGE ... WITH MAX_....
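For example (the account name is hypothetical), this statement changes only
the hourly query limit without granting anything else:
mysql> GRANT USAGE ON *.* TO francis@localhost WITH MAX_QUERIES_PER_HOUR 20;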
You cannot grant another user a privilege you don't have yourself;
the GRANT privilege allows you to give away only those privileges
you possess.
Be aware that when you grant a user the GRANT privilege at a
particular privilege level, any privileges the user already possesses (or
is given in the future!) at that level are also grantable by that user.
Suppose you grant a user the INSERT privilege on a database. If
you then grant the SELECT privilege on the database and specify
WITH GRANT OPTION, the user can give away not only the SELECT
privilege, but also INSERT. If you then grant the UPDATE
privilege to the user on the database, the user can give away the
INSERT, SELECT and UPDATE.
You should not grant ALTER privileges to a normal user. If you
do that, the user can try to subvert the privilege system by renaming
tables!
Note that if you are using table or column privileges for even one user, the server examines table and column privileges for all users and this will slow down MySQL a bit.
When mysqld starts, all privileges are read into memory.
Database, table, and column privileges take effect at once, and
user-level privileges take effect the next time the user connects.
Modifications to the grant tables that you perform using GRANT or
REVOKE are noticed by the server immediately.
If you modify the grant tables manually (using INSERT, UPDATE,
etc.), you should execute a FLUSH PRIVILEGES statement or run
mysqladmin flush-privileges to tell the server to reload the grant
tables.
See section 4.3.3 When Privilege Changes Take Effect.
The biggest differences between the SQL standard and MySQL versions of
GRANT are:
MySQL doesn't support the SQL-99 TRIGGER or UNDER
privileges.
In MySQL, if you have the INSERT privilege on only some of the
columns in a table, you can execute INSERT statements on the
table; the columns for which you don't have the INSERT privilege
will be set to their default values. SQL-99 requires you to have the
INSERT privilege on all columns.
In MySQL, privileges can be dropped only with explicit
REVOKE commands or by manipulating the
MySQL grant tables.
For a description of using REQUIRE, see section 4.3.9 Using Secure Connections.
There are several distinctions between the way user names and passwords are used by MySQL and the way they are used by Unix or Windows:
User names, as used by MySQL for authentication purposes, have nothing to
do with Unix or Windows login names. Most MySQL clients by default try to
log in using the current login name as the MySQL user name, but a different
name can be specified with the -u or
--user options. This means that you can't make a database secure in
any way unless all MySQL user names have passwords. Anyone may
attempt to connect to the server using any name, and they will succeed if
they specify any name that doesn't have a password.
MySQL encrypts passwords using a different algorithm than the one used
during the Unix login process; see the descriptions of the
PASSWORD() and ENCRYPT() functions in section 6.3.6.2 Miscellaneous Functions. Note that even though the password is stored 'scrambled',
knowing your 'scrambled' password is enough to be able to connect to
the MySQL server!
From version 4.1, MySQL employs a different password and login mechanism
that is secure even if TCP/IP packets are sniffed and/or the mysql
database is captured.
MySQL users and their privileges are normally created with the
GRANT command. See section 4.3.1 GRANT and REVOKE Syntax.
When you log in to a MySQL server with a command-line client you
should specify the password with --password=your-password.
See section 4.2.8 Connecting to the MySQL Server.
mysql --user=monty --password=guess database_name
If you want the client to prompt for a password, you should use
--password without any argument:
mysql --user=monty --password database_name
or the short form:
mysql -u monty -p database_name
Note that in the last example the password is not 'database_name'.
If you want to use the -p option to supply a password you should do so
like this:
mysql -u monty -pguess database_name
On some systems, the library call that MySQL uses to prompt for a password will automatically cut the password to 8 characters. Internally MySQL doesn't have any limit for the length of the password.
When mysqld starts, all grant table contents are read into memory and
become effective at that point.
Modifications to the grant tables that you perform using GRANT,
REVOKE, or SET PASSWORD are noticed by the server immediately.
If you modify the grant tables manually (using INSERT, UPDATE,
etc.), you should execute a FLUSH PRIVILEGES statement or run
mysqladmin flush-privileges or mysqladmin reload to tell the
server to reload the grant tables. Otherwise, your changes will have no
effect until you restart the server. If you change the grant tables manually
but forget to reload the privileges, you will be wondering why your changes
don't seem to make any difference!
When the server notices that the grant tables have been changed, existing client connections are affected as follows:
Table and column privilege changes take effect with the client's next
request.
Database privilege changes take effect at the next USE db_name
command.
Global privilege changes and password changes take effect the next time
the client connects.
After installing MySQL, you set up the initial access privileges by
running scripts/mysql_install_db.
See section 2.3.1 Quick Installation Overview.
The mysql_install_db script starts up the mysqld
server, then initialises the grant tables to contain the following set
of privileges:
The MySQL root user is created as a superuser who can do
anything. Connections must be made from the local host.
Note:
The initial root password is empty, so anyone can connect as root
without a password and be granted all privileges.
An anonymous user is created that can do anything with databases that have
a name of 'test' or starting with 'test_'. Connections must be
made from the local host. This means any local user can connect without a
password and be treated as the anonymous user.
Other privileges are denied. For example, normal users can't use
mysqladmin shutdown or mysqladmin processlist.
Note: the default privileges are different for Windows. See section 2.6.2.3 Running MySQL on Windows.
Because your installation is initially wide open, one of the first things you
should do is specify a password for the MySQL
root user. You can do this as follows (note that you specify the
password using the PASSWORD() function):
shell> mysql -u root mysql
mysql> SET PASSWORD FOR root@localhost=PASSWORD('new_password');
If you know what you are doing, you can also directly manipulate the privilege tables:
shell> mysql -u root mysql
mysql> UPDATE user SET Password=PASSWORD('new_password')
-> WHERE user='root';
mysql> FLUSH PRIVILEGES;
Another way to set the password is by using the mysqladmin command:
shell> mysqladmin -u root password new_password
Only users with write/update access to the mysql database can change the
password for other users. All normal users (not anonymous ones) can only
change their own password with either of the above commands or with
SET PASSWORD=PASSWORD('new password').
Note that if you update the password in the user table directly using
the first method, you must tell the server to re-read the grant tables (with
FLUSH PRIVILEGES), because the change will go unnoticed otherwise.
Once the root password has been set, thereafter you must supply that
password when you connect to the server as root.
You may wish to leave the root password blank so that you don't need
to specify it while you perform additional setup or testing. However, be sure
to set it before using your installation for any real production work.
See the scripts/mysql_install_db script to see how it sets up
the default privileges. You can use this as a basis to see how to
add other users.
If you want the initial privileges to be different from those just described
above, you can modify mysql_install_db before you run it.
To re-create the grant tables completely, remove all the `.frm',
`.MYI', and `.MYD' files in the directory containing the
mysql database. (This is the directory named `mysql' under
the database directory, which is listed when you run mysqld
--help.) Then run the mysql_install_db script, possibly after
editing it first to have the privileges you want.
Note: for MySQL versions older than Version 3.22.10,
you should not delete the `.frm' files. If you accidentally do this,
you should copy them back from your MySQL distribution before
running mysql_install_db.
You can add users two different ways: by using GRANT statements
or by manipulating the MySQL grant tables directly. The
preferred method is to use GRANT statements, because they are
more concise and less error-prone. See section 4.3.1 GRANT and REVOKE Syntax.
There are also a lot of contributed programs like phpmyadmin
that can be used to create and administrate users.
The following examples show how to use the mysql client to set up new
users. These examples assume that privileges are set up according to the
defaults described in the previous section. This means that to make changes,
you must be on the same machine where mysqld is running, you must
connect as the MySQL root user, and the root user must
have the INSERT privilege for the mysql database and the
RELOAD administrative privilege. Also, if you have changed the
root user password, you must specify it for the mysql commands here.
You can add new users by issuing GRANT statements:
shell> mysql --user=root mysql
mysql> GRANT ALL PRIVILEGES ON *.* TO monty@localhost
-> IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO monty@"%"
-> IDENTIFIED BY 'some_pass' WITH GRANT OPTION;
mysql> GRANT RELOAD,PROCESS ON *.* TO admin@localhost;
mysql> GRANT USAGE ON *.* TO dummy@localhost;
These GRANT statements set up three new users:
monty
A full superuser who can connect to the server from anywhere, but who must
use a password 'some_pass' to do so. Note that we must issue
GRANT statements for both monty@localhost and
monty@"%". If we don't add the entry with localhost, the
anonymous user entry for localhost that is created by
mysql_install_db will take precedence when we connect from the local
host, because it has a more specific Host field value and thus comes
earlier in the user table sort order.
admin
A user who can connect from localhost without a password and who is
granted the RELOAD and PROCESS administrative privileges.
This allows the user to execute the mysqladmin reload,
mysqladmin refresh, and mysqladmin flush-* commands, as well as
mysqladmin processlist. No database-related privileges are granted.
(They can be granted later by issuing additional GRANT statements.)
dummy
A user who can connect without a password, but only from the local host.
The global privileges are all set to 'N'; the USAGE privilege
type allows you to create a user with no privileges. It is assumed that you
will grant database-specific privileges later.
You can also add the same user access information directly by issuing
INSERT statements and then telling the server to reload the grant
tables:
shell> mysql --user=root mysql
mysql> INSERT INTO user VALUES('localhost','monty',PASSWORD('some_pass'),
-> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO user VALUES('%','monty',PASSWORD('some_pass'),
-> 'Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO user SET Host='localhost',User='admin',
-> Reload_priv='Y', Process_priv='Y';
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('localhost','dummy','');
mysql> FLUSH PRIVILEGES;
Depending on your MySQL version, you may have to use a different
number of 'Y' values above (versions prior to Version 3.22.11 had fewer
privilege columns). For the admin user, the more readable extended
INSERT syntax that is available starting with Version 3.22.11 is used.
Note that to set up a superuser, you need only create a user table
entry with the privilege fields set to 'Y'. No db or
host table entries are necessary.
The privilege columns in the user table were not set explicitly in the
last INSERT statement (for the dummy user), so those columns
are assigned the default value of 'N'. This is the same thing that
GRANT USAGE does.
The following example adds a user custom who can connect from hosts
localhost, server.domain, and whitehouse.gov. He wants
to access the bankaccount database only from localhost,
the expenses database only from whitehouse.gov, and
the customer database from all three hosts. He wants
to use the password stupid from all three hosts.
To set up this user's privileges using GRANT statements, run these
commands:
shell> mysql --user=root mysql
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON bankaccount.*
-> TO custom@localhost
-> IDENTIFIED BY 'stupid';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON expenses.*
-> TO custom@whitehouse.gov
-> IDENTIFIED BY 'stupid';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP
-> ON customer.*
-> TO custom@'%'
-> IDENTIFIED BY 'stupid';
The reason that we issue separate GRANT statements for the user 'custom' is that we want to give the user access to MySQL both from the local machine with Unix sockets and from the remote machine 'whitehouse.gov' over TCP/IP.
To set up the user's privileges by modifying the grant tables directly,
run these commands (note the FLUSH PRIVILEGES at the end):
shell> mysql --user=root mysql
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('localhost','custom',PASSWORD('stupid'));
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('server.domain','custom',PASSWORD('stupid'));
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('whitehouse.gov','custom',PASSWORD('stupid'));
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
-> Create_priv,Drop_priv)
-> VALUES
-> ('localhost','bankaccount','custom','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
-> Create_priv,Drop_priv)
-> VALUES
-> ('whitehouse.gov','expenses','custom','Y','Y','Y','Y','Y','Y');
mysql> INSERT INTO db
-> (Host,Db,User,Select_priv,Insert_priv,Update_priv,Delete_priv,
-> Create_priv,Drop_priv)
-> VALUES('%','customer','custom','Y','Y','Y','Y','Y','Y');
mysql> FLUSH PRIVILEGES;
The first three INSERT statements add user table entries that
allow user custom to connect from the various hosts with the given
password, but grant no permissions to him (all privileges are set to the
default value of 'N'). The next three INSERT statements add
db table entries that grant privileges to custom for the
bankaccount, expenses, and customer databases, but only
when accessed from the proper hosts. As usual, when the grant tables are
modified directly, the server must be told to reload them (with
FLUSH PRIVILEGES) so that the privilege changes take effect.
If you want to give a specific user access from any machine in a given
domain, you can issue a GRANT statement like the following:
mysql> GRANT ...
-> ON *.*
-> TO myusername@"%.mydomainname.com"
-> IDENTIFIED BY 'mypassword';
To do the same thing by modifying the grant tables directly, do this:
mysql> INSERT INTO user VALUES ('%.mydomainname.com', 'myusername',
-> PASSWORD('mypassword'),...);
mysql> FLUSH PRIVILEGES;
Starting from MySQL 4.0.2 one can limit certain resources per user.
So far, the only available method of limiting usage of MySQL
server resources has been setting the max_user_connections
startup variable to a non-zero value. But this method is strictly
global and does not allow for management of individual users, which
could be of particular interest to Internet Service Providers.
Therefore, management of three resources is introduced on the individual user level:
Number of all queries per hour: all commands that can be run by the user.
Number of all updates per hour: any commands that change a table or database.
Number of connections made per hour: new connections opened per hour.
A user in the aforementioned context is a single entry in the
user table, which is uniquely identified by its user
and host columns.
All users are by default not limited in using the above resources,
unless the limits are granted to them. These limits can be granted
only via global GRANT (*.*), using this syntax:
GRANT ... WITH MAX_QUERIES_PER_HOUR N1
MAX_UPDATES_PER_HOUR N2
MAX_CONNECTIONS_PER_HOUR N3;
One can specify any combination of the above resources. N1, N2, and N3 are integers and stand for counts per hour.
If a user reaches any of the above limits within one hour, the connection will be terminated or refused and an appropriate error message will be issued.
Current usage values for a particular user can be flushed (set to zero)
by issuing a GRANT statement with any of the above clauses,
including a GRANT statement with the current values.
Also, current values for all users will be flushed if privileges are
reloaded (in the server or using mysqladmin reload)
or if the FLUSH USER_RESOURCES command is issued.
The feature is enabled as soon as a single user is granted any
of the limiting GRANT clauses.
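To illustrate (the account name is hypothetical), you could grant a
connection limit and later reset the current usage counters for all users:
mysql> GRANT USAGE ON *.* TO isp_user@'%' WITH MAX_CONNECTIONS_PER_HOUR 100;
mysql> FLUSH USER_RESOURCES;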
As a prerequisite for enabling this feature, the user table in
the mysql database must contain the additional columns, as
defined in the table creation scripts mysql_install_db and
mysql_install_db.sh in the `scripts' subdirectory.
In most cases you should use GRANT to set up your users/passwords,
so the following only applies for advanced users. See section 4.3.1 GRANT and REVOKE Syntax.
The examples in the preceding sections illustrate an important principle:
when you store a non-empty password using INSERT or UPDATE
statements, you must use the PASSWORD() function to encrypt it. This
is because the user table stores passwords in encrypted form, not as
plaintext. If you forget that fact, you are likely to attempt to set
passwords like this:
shell> mysql -u root mysql
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('%','jeffrey','biscuit');
mysql> FLUSH PRIVILEGES;
The result is that the plaintext value 'biscuit' is stored as the
password in the user table. When the user jeffrey attempts to
connect to the server using this password, the mysql client encrypts
it with PASSWORD(), generates an authentication vector
based on the encrypted password and a random number
obtained from the server, and sends the result to the server.
The server uses the password value in the user table
(which in this case is the unencrypted value 'biscuit')
to perform the same calculations, and compares the results.
The comparison fails and the server rejects the
connection:
shell> mysql -u jeffrey -pbiscuit test
Access denied
Passwords must be encrypted when they are inserted in the user
table, so the INSERT statement should have been specified like this
instead:
mysql> INSERT INTO user (Host,User,Password)
-> VALUES('%','jeffrey',PASSWORD('biscuit'));
You must also use the PASSWORD() function when you use SET
PASSWORD statements:
mysql> SET PASSWORD FOR jeffrey@"%" = PASSWORD('biscuit');
If you set passwords using the GRANT ... IDENTIFIED BY statement
or the mysqladmin password command, the PASSWORD() function
is unnecessary. They both take care of encrypting the password for you,
so you would specify a password of 'biscuit' like this:
mysql> GRANT USAGE ON *.* TO jeffrey@"%" IDENTIFIED BY 'biscuit';
or
shell> mysqladmin -u jeffrey password biscuit
Note: PASSWORD() is different from Unix password encryption.
See section 4.3.2 MySQL User Names and Passwords.
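You can see the difference for yourself: PASSWORD() produces the MySQL
password hash, whereas ENCRYPT() uses the Unix crypt() system call (and
returns NULL on systems without it), so the two values will not match:
mysql> SELECT PASSWORD('biscuit'), ENCRYPT('biscuit');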
It is inadvisable to specify your password in a way that exposes it to discovery by other users. The methods you can use to specify your password when you run client programs are listed here, along with an assessment of the risks of each method:
Never give anyone access to the mysql.user table. Knowing
the encrypted password for a user makes it possible to log in as this
user. The passwords are only scrambled so that one shouldn't be able to
see the real password you used (if you happen to use a similar password
with your other applications).
Use a -pyour_pass or --password=your_pass option on the command
line. This is convenient but insecure, because your password becomes visible
to system status programs (such as ps) that may be invoked by other
users to display command-lines. (MySQL clients typically overwrite
the command-line argument with zeroes during their initialisation sequence,
but there is still a brief interval during which the value is visible.)
Use the -p or --password option (with no your_pass value
specified). In this case, the client program solicits the password from
the terminal:
shell> mysql -u user_name -p
Enter password: ********
The `*' characters represent your password. It is more secure to enter your password this way than to specify it on the command-line because it is not visible to other users. However, this method of entering a password is suitable only for programs that you run interactively. If you want to invoke a client from a script that runs non-interactively, there is no opportunity to enter the password from the terminal. On some systems, you may even find that the first line of your script is read and interpreted (incorrectly) as your password!
Store your password in the [client] section of the `.my.cnf' file in your
home directory:
[client]
password=your_pass
If you store your password in `.my.cnf', the file should not be group or
world readable or writable. Make sure the file's access mode is
400 or 600 (a chmod example is shown after this list).
See section 4.1.2 `my.cnf' Option Files.
Store your password in the MYSQL_PWD environment variable, but
this method must be considered extremely insecure and should not be used.
Some versions of ps include an option to display the environment of
running processes; your password will be in plain sight for all to see if
you set MYSQL_PWD. Even on systems without such a version of
ps, it is unwise to assume there is no other method to observe process
environments. See section F Environment Variables.
All in all, the safest methods are to have the client program prompt for the password or to specify the password in a properly protected `.my.cnf' file.
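For instance, on a Unix system you could protect the option file like this
(assuming it is in the standard per-user location):
shell> chmod 600 $HOME/.my.cnf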
Beginning with version 4.0.0, MySQL has support for SSL encrypted connections. To understand how MySQL uses SSL, it's necessary to explain some basic SSL and X509 concepts. People who are already familiar with them can skip this part.
By default, MySQL uses unencrypted connections between the client and the server. This means that someone could watch all your traffic and look at the data being sent or received. They could even change the data while it is in transit between client and server. Sometimes you need to move information over public networks in a secure fashion; in such cases, using an unencrypted connection is unacceptable.
SSL is a protocol that uses different encryption algorithms to ensure that data received over a public network can be trusted. It has mechanisms to detect any change, loss or replay of data. SSL also incorporates algorithms to recognise and provide identity verification using the X509 standard.
Encryption is the way to make any kind of data unreadable. In fact, today's practice requires many additional security elements from encryption algorithms. They should resist many kind of known attacks like just messing with the order of encrypted messages or replaying data twice.
X509 is a standard that makes it possible to identify someone on the Internet. It is most commonly used in e-commerce applications. In basic terms, there should be some company (called a ``Certificate Authority'') that assigns electronic certificates to anyone who needs them. Certificates rely on asymmetric encryption algorithms that have two encryption keys (a public key and a secret key). A certificate owner can prove his identity by showing his certificate to the other party. A certificate contains its owner's public key. Any data encrypted with this public key can be decrypted only using the corresponding secret key, which is held by the owner of the certificate.
MySQL doesn't use encrypted connections by default, because doing so would make the client/server protocol much slower. Any kind of additional functionality requires the computer to do additional work, and encrypting data is a CPU-intensive operation that requires time and can delay MySQL's main tasks. By default MySQL is tuned to be as fast as possible.
If you need more information about SSL, X509, or encryption, you should use your favourite Internet search engine and search for keywords in which you are interested.
To get secure connections to work with MySQL you must do the following:
Configure MySQL with --with-vio --with-openssl.
Upgrade your mysql.user table with some new SSL-related columns. You can do this by
running the mysql_fix_privilege_tables.sh script.
This is necessary if your grant tables date from a version prior to MySQL
4.0.0.
You can check whether a running mysqld server supports OpenSSL by
examining if SHOW VARIABLES LIKE 'have_openssl' returns YES.
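For example (the output shown is only illustrative):
mysql> SHOW VARIABLES LIKE 'have_openssl';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| have_openssl  | YES   |
+---------------+-------+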
Here is an example for setting up SSL certificates for MySQL:
DIR=`pwd`/openssl
PRIV=$DIR/private
mkdir $DIR $PRIV $DIR/newcerts
cp /usr/share/ssl/openssl.cnf $DIR
replace ./demoCA $DIR -- $DIR/openssl.cnf
# Create necessary files: $database, $serial and $new_certs_dir
# directory (optional)
touch $DIR/index.txt
echo "01" > $DIR/serial
#
# Generation of Certificate Authority(CA)
#
openssl req -new -x509 -keyout $PRIV/cakey.pem -out $DIR/cacert.pem \
-config $DIR/openssl.cnf
# Sample output:
# Using configuration from /home/monty/openssl/openssl.cnf
# Generating a 1024 bit RSA private key
# ................++++++
# .........++++++
# writing new private key to '/home/monty/openssl/private/cakey.pem'
# Enter PEM pass phrase:
# Verifying password - Enter PEM pass phrase:
# -----
# You are about to be asked to enter information that will be incorporated
# into your certificate request.
# What you are about to enter is what is called a Distinguished Name or a DN.
# There are quite a few fields but you can leave some blank
# For some fields there will be a default value,
# If you enter '.', the field will be left blank.
# -----
# Country Name (2 letter code) [AU]:FI
# State or Province Name (full name) [Some-State]:.
# Locality Name (eg, city) []:
# Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB
# Organizational Unit Name (eg, section) []:
# Common Name (eg, YOUR name) []:MySQL admin
# Email Address []:
#
# Create server request and key
#
openssl req -new -keyout $DIR/server-key.pem -out \
$DIR/server-req.pem -days 3600 -config $DIR/openssl.cnf
# Sample output:
# Using configuration from /home/monty/openssl/openssl.cnf
# Generating a 1024 bit RSA private key
# ..++++++
# ..........++++++
# writing new private key to '/home/monty/openssl/server-key.pem'
# Enter PEM pass phrase:
# Verifying password - Enter PEM pass phrase:
# -----
# You are about to be asked to enter information that will be incorporated
# into your certificate request.
# What you are about to enter is what is called a Distinguished Name or a DN.
# There are quite a few fields but you can leave some blank
# For some fields there will be a default value,
# If you enter '.', the field will be left blank.
# -----
# Country Name (2 letter code) [AU]:FI
# State or Province Name (full name) [Some-State]:.
# Locality Name (eg, city) []:
# Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB
# Organizational Unit Name (eg, section) []:
# Common Name (eg, YOUR name) []:MySQL server
# Email Address []:
#
# Please enter the following 'extra' attributes
# to be sent with your certificate request
# A challenge password []:
# An optional company name []:
#
# Remove the passphrase from the key (optional)
#
openssl rsa -in $DIR/server-key.pem -out $DIR/server-key.pem
#
# Sign server cert
#
openssl ca -policy policy_anything -out $DIR/server-cert.pem \
-config $DIR/openssl.cnf -infiles $DIR/server-req.pem
# Sample output:
# Using configuration from /home/monty/openssl/openssl.cnf
# Enter PEM pass phrase:
# Check that the request matches the signature
# Signature ok
# The Subjects Distinguished Name is as follows
# countryName :PRINTABLE:'FI'
# organizationName :PRINTABLE:'MySQL AB'
# commonName :PRINTABLE:'MySQL admin'
# Certificate is to be certified until Sep 13 14:22:46 2003 GMT (365 days)
# Sign the certificate? [y/n]:y
#
#
# 1 out of 1 certificate requests certified, commit? [y/n]y
# Write out database with 1 new entries
# Data Base Updated
#
# Create client request and key
#
openssl req -new -keyout $DIR/client-key.pem -out \
$DIR/client-req.pem -days 3600 -config $DIR/openssl.cnf
# Sample output:
# Using configuration from /home/monty/openssl/openssl.cnf
# Generating a 1024 bit RSA private key
# .....................................++++++
# .............................................++++++
# writing new private key to '/home/monty/openssl/client-key.pem'
# Enter PEM pass phrase:
# Verifying password - Enter PEM pass phrase:
# -----
# You are about to be asked to enter information that will be incorporated
# into your certificate request.
# What you are about to enter is what is called a Distinguished Name or a DN.
# There are quite a few fields but you can leave some blank
# For some fields there will be a default value,
# If you enter '.', the field will be left blank.
# -----
# Country Name (2 letter code) [AU]:FI
# State or Province Name (full name) [Some-State]:.
# Locality Name (eg, city) []:
# Organization Name (eg, company) [Internet Widgits Pty Ltd]:MySQL AB
# Organizational Unit Name (eg, section) []:
# Common Name (eg, YOUR name) []:MySQL user
# Email Address []:
#
# Please enter the following 'extra' attributes
# to be sent with your certificate request
# A challenge password []:
# An optional company name []:
#
# Remove a passphrase from the key (optional)
#
openssl rsa -in $DIR/client-key.pem -out $DIR/client-key.pem
#
# Sign client cert
#
openssl ca -policy policy_anything -out $DIR/client-cert.pem \
-config $DIR/openssl.cnf -infiles $DIR/client-req.pem
# Sample output:
# Using configuration from /home/monty/openssl/openssl.cnf
# Enter PEM pass phrase:
# Check that the request matches the signature
# Signature ok
# The Subjects Distinguished Name is as follows
# countryName :PRINTABLE:'FI'
# organizationName :PRINTABLE:'MySQL AB'
# commonName :PRINTABLE:'MySQL user'
# Certificate is to be certified until Sep 13 16:45:17 2003 GMT (365 days)
# Sign the certificate? [y/n]:y
#
#
# 1 out of 1 certificate requests certified, commit? [y/n]y
# Write out database with 1 new entries
# Data Base Updated
#
# Create a my.cnf file that you can use to test the certificates
#
cnf=""
cnf="$cnf [client]"
cnf="$cnf ssl-ca=$DIR/cacert.pem"
cnf="$cnf ssl-cert=$DIR/client-cert.pem"
cnf="$cnf ssl-key=$DIR/client-key.pem"
cnf="$cnf [mysqld]"
cnf="$cnf ssl-ca=$DIR/cacert.pem"
cnf="$cnf ssl-cert=$DIR/server-cert.pem"
cnf="$cnf ssl-key=$DIR/server-key.pem"
echo $cnf | replace " " '
' > $DIR/my.cnf
#
# To test MySQL
mysqld --defaults-file=$DIR/my.cnf &
mysql --defaults-file=$DIR/my.cnf
You can also test your setup by modifying the above `my.cnf' file to refer to the demo certificates in the mysql-source-dist/SSL directory.
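Assuming the server was built with OpenSSL support, you can also check from
within a connection whether it is actually encrypted; an empty Ssl_cipher
value means SSL is not being used:
mysql> SHOW STATUS LIKE 'Ssl_cipher';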
GRANT Options
MySQL can check X509 certificate attributes in addition to the normal username/password scheme. All the usual options are still required (username, password, IP address mask, database/table name).
There are different possibilities to limit connections:
The REQUIRE SSL option limits the server to allow only SSL-encrypted
connections for the account. Note that this option can be omitted
if there are any ACL records which allow non-SSL connections.
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret" REQUIRE SSL;
REQUIRE X509 means that the client should have a valid certificate
but we do not care about the exact certificate, issuer or subject.
The only restriction is that it should be possible to verify its
signature with one of the CA certificates.
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret" REQUIRE X509;
REQUIRE ISSUER "issuer" places a restriction on connection attempts:
The client must present a valid X509 certificate issued by CA "issuer".
Using X509 certificates always implies encryption, so the SSL option
is unnecessary.
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret"
-> REQUIRE ISSUER "C=FI, ST=Some-State, L=Helsinki,
"> O=MySQL Finland AB, CN=Tonu Samuel/Email=tonu@mysql.com";
REQUIRE SUBJECT "subject" requires clients to have valid X509
certificate with subject "subject" on it. If the client presents a
certificate that is valid but has a different "subject", the connection
is disallowed.
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret"
-> REQUIRE SUBJECT "C=EE, ST=Some-State, L=Tallinn,
"> O=MySQL demo client certificate,
"> CN=Tonu Samuel/Email=tonu@mysql.com";
REQUIRE CIPHER "cipher" is needed to assure enough strong ciphers
and keylengths will be used. SSL itself can be weak if old algorithms
with short encryption keys are used. Using this option, we can ask for
some exact cipher method to allow a connection.
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret"
-> REQUIRE CIPHER "EDH-RSA-DES-CBC3-SHA";
The SUBJECT, ISSUER, and CIPHER options can be
combined in the REQUIRE clause like this:
mysql> GRANT ALL PRIVILEGES ON test.* TO root@localhost
-> IDENTIFIED BY "goodsecret"
-> REQUIRE SUBJECT "C=EE, ST=Some-State, L=Tallinn,
"> O=MySQL demo client certificate,
"> CN=Tonu Samuel/Email=tonu@mysql.com"
-> AND ISSUER "C=FI, ST=Some-State, L=Helsinki,
"> O=MySQL Finland AB, CN=Tonu Samuel/Email=tonu@mysql.com"
-> AND CIPHER "EDH-RSA-DES-CBC3-SHA";
Starting from MySQL 4.0.4 the AND keyword is optional between
REQUIRE options.
The order of the options does not matter, but no option can be specified
twice.
Because MySQL tables are stored as files, it is easy to do a
backup. To get a consistent backup, do a LOCK TABLES on the
relevant tables followed by FLUSH TABLES for the tables.
See section 6.7.2 LOCK TABLES/UNLOCK TABLES Syntax.
See section 4.5.3 FLUSH Syntax.
You only need a read lock; this allows other threads to continue to
query the tables while you are making a copy of the files in the
database directory. The FLUSH TABLES statement is needed to ensure that
all active index pages are written to disk before you start the backup.
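A minimal sketch of this procedure, assuming a hypothetical table tbl_name in database db_name (all paths are placeholders):
mysql> LOCK TABLES tbl_name READ;
mysql> FLUSH TABLES tbl_name;
shell> cp /path/to/datadir/db_name/tbl_name.* /path/to/backup/dir/
mysql> UNLOCK TABLES;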
Starting from 3.23.56 and 4.0.12 BACKUP TABLE will not allow you
to overwrite existing files as this would be a security risk.
If you want to make a SQL level backup of a table, you can use
SELECT INTO OUTFILE or BACKUP TABLE. See section 6.4.1 SELECT Syntax.
See section 4.4.2 BACKUP TABLE Syntax.
Another way to back up a database is to use the mysqldump program or
the mysqlhotcopy script. See section 4.8.5 mysqldump, Dumping Table Structure and Data.
See section 4.8.6 mysqlhotcopy, Copying MySQL Databases and Tables.
shell> mysqldump --tab=/path/to/some/dir --opt --all
or
shell> mysqlhotcopy database /path/to/some/dir
You can also simply copy all table files (`*.frm', `*.MYD', and `*.MYI' files) as long as the server isn't updating anything. The script
mysqlhotcopy does use this method.
Stop mysqld if it's running, then start it with the
--log-bin[=file_name] option. See section 4.9.4 The Binary Log. The binary
log file(s) provide you with the information you need to replicate
changes to the database that are made subsequent to the point at which
you executed mysqldump.
If you have to restore something, try to recover your tables using
REPAIR TABLE or myisamchk -r first. That should work in
99.9% of all cases. If myisamchk fails, try the following
procedure (this will only work if you have started MySQL with
--log-bin, see section 4.9.4 The Binary Log):
Restore the original mysqldump backup.
shell> mysqlbinlog hostname-bin.[0-9]* | mysql
If you are using the update log (which will be removed in MySQL 5.0) you can use:
shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
ls is used to get all the update log files in the right order.
You can also do selective backups with SELECT * INTO OUTFILE 'file_name'
FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE
... To avoid duplicate records, you need a PRIMARY KEY or a
UNIQUE key in the table. The REPLACE keyword causes old records
to be replaced with new ones when a new record duplicates an old record on
a unique key value.
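For example, a selective backup and restore of a single table might look like this (the file name and table name are placeholders):
mysql> SELECT * INTO OUTFILE '/tmp/tbl_name.txt' FROM tbl_name;
mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' REPLACE INTO TABLE tbl_name;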
If you have performance problems making backups on your system, you can solve this by setting up replication and doing the backups on the slave instead of on the master. See section 4.10.1 Introduction.
If you are using a Veritas filesystem, you can do:
FLUSH TABLES WITH READ LOCK.
mount vxfs snapshot.
UNLOCK TABLES.
BACKUP TABLE Syntax
BACKUP TABLE tbl_name[,tbl_name...] TO '/path/to/backup/directory'
Copies to the backup directory the minimum number of table files needed
to restore the table, after flushing any buffered changes to disk. Currently
works only for MyISAM tables.
For MyISAM tables, copies `.frm' (definition) and
`.MYD' (data) files. The index file can be rebuilt from those two.
Before using this command, please see section 4.4.1 Database Backups.
During the backup, a read lock will be held for each table, one at a time,
as they are being backed up. If you want to back up several tables as
a snapshot, you must first issue LOCK TABLES to obtain a read
lock for each table in the group.
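For example, to back up two tables as a consistent snapshot (table names and path are placeholders):
mysql> LOCK TABLES tbl1 READ, tbl2 READ;
mysql> BACKUP TABLE tbl1,tbl2 TO '/path/to/backup/directory';
mysql> UNLOCK TABLES;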
The command returns a table with the following columns:
| Column | Value |
| Table | Table name |
| Op | Always ``backup'' |
| Msg_type | One of status, error, info or warning. |
| Msg_text | The message. |
Note that BACKUP TABLE is only available in MySQL
version 3.23.25 and later.
RESTORE TABLE Syntax
RESTORE TABLE tbl_name[,tbl_name...] FROM '/path/to/backup/directory'
Restores the table(s) from the backup that was made with
BACKUP TABLE. Existing tables will not be overwritten; if you
try to restore over an existing table, you will get an error. Restoring
will take longer than backing up due to the need to rebuild the index. The
more keys you have, the longer it will take. Just as BACKUP TABLE,
RESTORE TABLE currently works only for MyISAM tables.
The command returns a table with the following columns:
| Column | Value |
| Table | Table name |
| Op | Always ``restore'' |
| Msg_type | One of status, error, info or warning. |
| Msg_text | The message. |
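For example, to restore the tables backed up above (the tables must not already exist in the database):
mysql> RESTORE TABLE tbl1,tbl2 FROM '/path/to/backup/directory';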
CHECK TABLE Syntax
CHECK TABLE tbl_name[,tbl_name...] [option [option...]]
option = QUICK | FAST | MEDIUM | EXTENDED | CHANGED
CHECK TABLE works only on MyISAM and InnoDB tables. On
MyISAM tables it's the same thing as running myisamchk -m
table_name on the table.
If you don't specify any option MEDIUM is used.
Checks the table(s) for errors. For MyISAM tables the key statistics
are updated. The command returns a table with the following columns:
| Column | Value |
| Table | Table name. |
| Op | Always ``check''. |
| Msg_type | One of status, error, info, or warning. |
| Msg_text | The message. |
Note that you can get many rows of information for each checked table. The
last row will be of Msg_type status and should normally be
OK. If you don't get OK or Table is already up to
date, you should normally run a repair of the table. See section 4.4.6 Using myisamchk for Table Maintenance and Crash Recovery. Table is already up to date means that the handler for the
given table type told MySQL that there wasn't any need to check the
table.
The different check types stand for the following:
| Type | Meaning |
QUICK | Don't scan the rows to check for wrong links. |
FAST | Only check tables which haven't been closed properly. |
CHANGED | Only check tables which have been changed since last check or haven't been closed properly. |
MEDIUM | Scan rows to verify that deleted links are okay. This also calculates a key checksum for the rows and verifies this with a calculated checksum for the keys. |
EXTENDED | Do a full key lookup for all keys for each row. This ensures that the table is 100% consistent, but will take a long time! |
For MyISAM tables with dynamically sized rows, a started check will always
do a MEDIUM check. For statically sized rows, the row scan is skipped
for QUICK and FAST, as such rows are very seldom corrupted.
You can combine check options as in:
CHECK TABLE test_table FAST QUICK;
which simply does a quick check on the table to determine whether it was closed properly.
Note that in some cases CHECK TABLE will change the
table! This happens if the table is marked as 'corrupted' or 'not
closed properly' but CHECK TABLE doesn't find any problems in the
table. In this case, CHECK TABLE will mark the table as okay.
If a table is corrupted, then it's most likely that the problem is in the indexes and not in the data part. All of the above check types check the indexes thoroughly and should thus find most errors.
If you just want to check a table that you assume is okay, you should use
no check options or the QUICK option. The latter should be used
when you are in a hurry and can accept the very small risk that
QUICK will miss an error in the datafile. (In most cases,
under normal usage, MySQL should find any error in the data
file. If that happens, the table will be marked as 'corrupted',
in which case the table can't be used until it's repaired.)
FAST and CHANGED are mostly intended to be used from a
script (for example to be executed from cron) if you want to check your
tables from time to time. In most cases FAST is to be preferred
over CHANGED. (The only case when it isn't is when you suspect
that you have found a bug in the MyISAM code.)
EXTENDED is only to be used after you have run a normal check but
still get strange errors from a table when MySQL tries to
update a row or find a row by key (this is very unlikely if a
normal check has succeeded!).
Some things reported by CHECK TABLE can't be corrected automatically:
Found row where the auto_increment column has the value 0.
This means that you have a row in the table where the
AUTO_INCREMENT index column contains the value 0.
(It's possible to create a row where the AUTO_INCREMENT column is 0 by
explicitly setting the column to 0 with an UPDATE statement.)
This isn't an error in itself, but could cause trouble if you decide to
dump the table and restore it or do an ALTER TABLE on the
table. In this case the AUTO_INCREMENT column will change value,
according to the rules of AUTO_INCREMENT columns, which could cause
problems like a duplicate key error.
To get rid of the warning, just execute an UPDATE statement
to set the column to some value other than 0.
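A minimal sketch, assuming a hypothetical AUTO_INCREMENT column named id and an otherwise unused value 12345:
mysql> UPDATE tbl_name SET id=12345 WHERE id=0;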
REPAIR TABLE Syntax
REPAIR [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name[,tbl_name...] [QUICK] [EXTENDED] [USE_FRM]
REPAIR TABLE works only on MyISAM tables and is the same
as running myisamchk -r table_name on the table.
Normally you should never have to run this command, but if disaster strikes
you are very likely to get back all your data from a MyISAM table with
REPAIR TABLE. If your tables get corrupted a lot you should
try to find the reason for this! See section A.4.1 What To Do If MySQL Keeps Crashing. See section 7.1.3 MyISAM Table Problems.
REPAIR TABLE repairs a possibly corrupted table. The command returns a
table with the following columns:
| Column | Value |
| Table | Table name |
| Op | Always ``repair'' |
| Msg_type | One of status, error, info or warning. |
| Msg_text | The message. |
Note that you can get many rows of information for each repaired
table. The last row will be of Msg_type status and should
normally be OK. If you don't get OK, you should try
repairing the table with myisamchk -o, as REPAIR TABLE
does not yet implement all the options of myisamchk. In the near
future, we will make it more flexible.
If QUICK is given then MySQL will try to do a
REPAIR of only the index tree.
If you use EXTENDED then MySQL will create the index row
by row instead of creating one index at a time with sorting; this may be
better than sorting on fixed-length keys if you have long CHAR
keys that compress very well. This type of repair is like that done by
myisamchk --safe-recover.
As of MySQL 4.0.2, there is a USE_FRM mode for REPAIR.
Use it if the `.MYI' file is missing or if its header is corrupted.
In this mode MySQL will recreate the table, using information from the
`.frm' file. This kind of repair cannot be done with myisamchk.
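For example, the following statements do a quick index-only repair and a repair based on the `.frm' file, respectively (tbl_name is a placeholder):
mysql> REPAIR TABLE tbl_name QUICK;
mysql> REPAIR TABLE tbl_name USE_FRM;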
Warning: If mysqld dies during a REPAIR TABLE,
it's essential that you immediately do another REPAIR on the table
before executing any other commands on it. (It's of course always good
to start with a backup.) In the worst case, you can have a new clean
index file without information about the data file, and the next
command you run may then overwrite the data file. This is an unlikely,
but possible, scenario.
Strictly before MySQL 4.1.1, REPAIR commands are not written
to the binary log. Since MySQL 4.1.1 they are written to the binary
log unless the optional NO_WRITE_TO_BINLOG keyword
(or its alias LOCAL) was used.
Using myisamchk for Table Maintenance and Crash Recovery
Starting with MySQL Version 3.23.13, you can check MyISAM
tables with the CHECK TABLE command. See section 4.4.4 CHECK TABLE Syntax. You can
repair tables with the REPAIR TABLE command. See section 4.4.5 REPAIR TABLE Syntax.
To check/repair MyISAM tables (`.MYI' and `.MYD') you should
use the myisamchk utility. To check/repair ISAM tables
(`.ISM' and `.ISD') you should use the isamchk
utility. See section 7 MySQL Table Types.
In the following text we will talk about myisamchk, but everything
also applies to the old isamchk.
You can use the myisamchk utility to get information about your
database tables, check and repair them, or optimise them. The following
sections describe how to invoke myisamchk (including a
description of its options), how to set up a table maintenance schedule,
and how to use myisamchk to perform its various functions.
You can, in most cases, also use the OPTIMIZE TABLE command to
optimise and repair tables, but this is not as fast or reliable (in case
of real fatal errors) as myisamchk. On the other hand,
OPTIMIZE TABLE is easier to use and you don't have to worry about
flushing tables.
See section 4.5.1 OPTIMIZE TABLE Syntax.
Even though the repair in myisamchk is quite secure, it's always a
good idea to make a backup before doing a repair (or anything that could
make a lot of changes to a table).
myisamchk Invocation Syntax
myisamchk is invoked like this:
shell> myisamchk [options] tbl_name
The options specify what you want myisamchk to do. They are
described here. (You can also get a list of options by invoking
myisamchk --help.) With no options, myisamchk simply checks your
table. To get more information or to tell myisamchk to take corrective
action, specify options as described here and in the following sections.
tbl_name is the database table you want to check/repair. If you run
myisamchk somewhere other than in the database directory, you must
specify the path to the file, because myisamchk has no idea where your
database is located. Actually, myisamchk doesn't care whether
the files you are working on are located in a database directory; you can
copy the files that correspond to a database table into another location and
perform recovery operations on them there.
You can name several tables on the myisamchk command-line if you
wish. You can also specify a name as an index file
name (with the `.MYI' suffix), which allows you to specify all
tables in a directory by using the pattern `*.MYI'.
For example, if you are in a database directory, you can check all the
tables in the directory like this:
shell> myisamchk *.MYI
If you are not in the database directory, you can check all the tables there by specifying the path to the directory:
shell> myisamchk /path/to/database_dir/*.MYI
You can even check all tables in all databases by specifying a wildcard with the path to the MySQL data directory:
shell> myisamchk /path/to/datadir/*/*.MYI
The recommended way to quickly check all tables is:
myisamchk --silent --fast /path/to/datadir/*/*.MYI
isamchk --silent /path/to/datadir/*/*.ISM
If you want to check all tables and repair all tables that are corrupted, you can use the following line:
myisamchk --silent --force --fast --update-state -O key_buffer=64M \
-O sort_buffer=64M -O read_buffer=1M -O write_buffer=1M \
/path/to/datadir/*/*.MYI
isamchk --silent --force -O key_buffer=64M -O sort_buffer=64M \
-O read_buffer=1M -O write_buffer=1M /path/to/datadir/*/*.ISM
The above assumes that you have more than 64MB of memory free.
Note that if you get an error like:
myisamchk: warning: 1 clients is using or hasn't closed the table properly
This means that you are trying to check a table that has been updated by
another program (like the mysqld server) that hasn't yet closed
the file or that has died without closing the file properly.
If mysqld is running, you must force a sync/close of all
tables with FLUSH TABLES and ensure that no one is using the
tables while you are running myisamchk. In MySQL Version 3.23
the easiest way to avoid this problem is to use CHECK TABLE
instead of myisamchk to check tables.
General Options for myisamchk
myisamchk supports the following options.
-# or --debug=debug_options
debug_options string often is
'd:t:o,filename'.
-? or --help
-O var=option, --set-variable var=option
The --set-variable syntax is deprecated since MySQL 4.0; just use --var=option on its own.
The possible variables and their default values
for myisamchk can be examined with myisamchk --help:
| Variable | Value |
| key_buffer_size | 523264 |
| read_buffer_size | 262136 |
| write_buffer_size | 262136 |
| sort_buffer_size | 2097144 |
| sort_key_blocks | 16 |
| decode_bits | 9 |
sort_buffer_size is used when the keys are repaired by sorting
keys, which is the normal case when you use --recover.
key_buffer_size is used when you are checking the table with
--extended-check or when the keys are repaired by inserting keys
row by row into the table (like when doing normal inserts). Repairing
through the key buffer is used in the following cases:
You use --safe-recover.
The keys are big CHAR, VARCHAR or TEXT keys, as the
sort needs to store the whole keys during sorting. If you have lots
of temporary space and you can force myisamchk to repair by sorting,
you can use the --sort-recover option.
-s or --silent
-s
twice (-ss) to make myisamchk very silent.
-v or --verbose
-d and
-e. Use -v multiple times (-vv, -vvv) for more
verbosity!
-V or --version
Print the myisamchk version and exit.
-w or --wait
mysqld
on the table with --skip-external-locking, the table can only be locked
by another myisamchk command.
Check Options for myisamchk
-c or --check
Check the table for errors. This is the default operation if you are not giving myisamchk any options that override this.
-e or --extend-check
myisamchk or myisamchk --medium-check should, in most
cases, be able to find out if there are any errors in the table.
If you are using --extended-check and have much memory, you should
increase the value of key_buffer_size a lot!
-F or --fast
-C or --check-only-changed
-f or --force
myisamchk with -r (repair) on the table, if
myisamchk finds any errors in the table.
-i or --information
-m or --medium-check
-U or --update-state
--check-only-changed option, but you shouldn't use this
option if the mysqld server is using the table and you are
running mysqld with --skip-external-locking.
-T or --read-only
myisamchk
to check a table that is in use by some other application that doesn't
use locking (like mysqld --skip-external-locking).
The following options are used if you start myisamchk with
-r or -o:
-D # or --data-file-length=#
-e or --extend-check
-f or --force
Overwrite old temporary files (table_name.TMD) instead of aborting.
-k # or --keys-used=#
If you are using ISAM, tells the ISAM table handler to update only the
first # indexes. If you are using MyISAM, tells which keys
to use, where each binary bit stands for one key (first key is bit 0).
This can be used to get faster inserts! Deactivated indexes can be
reactivated by using myisamchk -r.
-l or --no-symlinks
Do not follow symbolic links. Normally myisamchk repairs the
table a symlink points at. This option doesn't exist in MySQL 4.0,
as MySQL 4.0 will not remove symlinks during repair.
-r or --recover
-r, you
should then try -o. (Note that in the unlikely case that -r
fails, the datafile is still intact.)
If you have lots of memory, you should increase the size of
sort_buffer_size!
-o or --safe-recover
-r, but can handle a couple of very unlikely cases that
-r cannot handle. This recovery method also uses much less disk
space than -r. Normally one should always first repair with
-r, and only if this fails use -o.
If you have lots of memory, you should increase the size of
key_buffer_size!
-n or --sort-recover
Force myisamchk to use sorting to resolve the keys even if the
temporary files would be very big.
--character-sets-dir=...
--set-character-set=name
-t or --tmpdir=path
Path of the directory to be used for storing temporary files. If this
is not set, myisamchk will use the environment variable TMPDIR for this.
Starting from MySQL 4.1, tmpdir can be set to a list of paths
separated by colon : (semicolon ; on Windows). They
will be used in round-robin fashion.
-q or --quick
Faster repair by not modifying the datafile. One can give a second
-q to force myisamchk to modify the original datafile in case
of duplicate keys.
-u or --unpack
Other Options for myisamchk
Other actions that myisamchk can do, besides repairing and checking tables:
-a or --analyze
Analyse the distribution of keys. This improves join performance by helping
MySQL decide in which order it should join the tables. You can check the
calculated distribution with `myisamchk --describe --verbose table_name' or using SHOW KEYS in
MySQL.
-d or --description
-A or --set-auto-increment[=value]
Force AUTO_INCREMENT to start at this or a higher value. If no value is
given, then sets the next AUTO_INCREMENT value to the highest used value
for the auto key + 1.
-S or --sort-index
-R or --sort-records=#
Sorts records according to an index. This makes your data much more
localised and may speed up ranged SELECT and ORDER BY operations on
this index. (It may be very slow to do a sort the first time!)
To find out a table's index numbers, use SHOW INDEX, which shows a
table's indexes in the same order that myisamchk sees them. Indexes are
numbered beginning with 1.
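For example, to sort the records of a hypothetical table according to its first index:
shell> myisamchk --sort-records=1 tbl_name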
myisamchk Memory Usage
Memory allocation is important when you run myisamchk.
myisamchk uses no more memory than you specify with the -O
options. If you are going to use myisamchk on very large files,
you should first decide how much memory you want it to use. The default
is to use only about 3M to fix things. By using larger values, you can
get myisamchk to operate faster. For example, if you have more
than 32M RAM, you could use options such as these (in addition to any
other options you might specify):
shell> myisamchk -O sort=16M -O key=16M -O read=1M -O write=1M ...
Using -O sort=16M should probably be enough for most cases.
Be aware that myisamchk uses temporary files in TMPDIR. If
TMPDIR points to a memory filesystem, you may easily get out of
memory errors. If this happens, set TMPDIR to point at some directory
with more space and restart myisamchk.
When repairing, myisamchk will also need a lot of disk space:
--quick, as in this
case only the index file will be re-created. This space is needed on the
same disk as the original record file!
--recover or --sort-recover
(but not when using --safe-recover), you will need space for a
sort buffer for:
(largest_key + row_pointer_length)*number_of_rows * 2.
You can check the length of the keys and the row_pointer_length with
myisamchk -dv table.
This space is allocated on the temporary disk (specified by TMPDIR or
--tmpdir=#).
If you have a problem with disk space during repair, you can try to use
--safe-recover instead of --recover.
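As a rough illustration of the sort buffer formula above: for a hypothetical table with a largest key of 10 bytes, a 4-byte row pointer, and 1,000,000 rows, you would need (10 + 4) * 1,000,000 * 2 = 28,000,000 bytes (roughly 27MB) of temporary disk space.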
Using myisamchk for Crash Recovery
If you run mysqld with --skip-external-locking (which is the
default on some systems, like Linux), you can't reliably use myisamchk
to check a table when mysqld is using the same table. If you
can be sure that no one is accessing the tables through mysqld
while you run myisamchk, you only have to do mysqladmin
flush-tables before you start checking the tables. If you can't
guarantee the above, then you must take down mysqld while you
check the tables. If you run myisamchk while mysqld is updating
the tables, you may get a warning that a table is corrupt even if it
isn't.
If you are not using --skip-external-locking, you can use
myisamchk to check tables at any time. While you do this, all clients
that try to update the table will wait until myisamchk is ready before
continuing.
If you use myisamchk to repair or optimise tables, you
must always ensure that the mysqld server is not using
the table (this also applies if you are using --skip-external-locking).
If you don't take down mysqld you should at least do a
mysqladmin flush-tables before you run myisamchk.
Your tables may be corrupted if the server and myisamchk
access the tables simultaneously.
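For example (paths and names are placeholders), flush first and then check a table, making sure no client updates it in between:
shell> mysqladmin flush-tables
shell> myisamchk --medium-check /path/to/datadir/db_name/tbl_name.MYI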
This chapter describes how to check for and deal with data corruption in MySQL databases. If your tables get corrupted frequently you should try to find the reason for this! See section A.4.1 What To Do If MySQL Keeps Crashing.
The MyISAM table section contains reasons why a table could become
corrupted. See section 7.1.3 MyISAM Table Problems.
When performing crash recovery, it is important to understand that each table
tbl_name in a database corresponds to three files in the database
directory:
| File | Purpose |
| `tbl_name.frm' | Table definition (form) file |
| `tbl_name.MYD' | Datafile |
| `tbl_name.MYI' | Index file |
Each of these three file types is subject to corruption in various ways, but problems occur most often in datafiles and index files.
myisamchk works by creating a copy of the `.MYD' (data) file
row by row. It ends the repair stage by removing the old `.MYD'
file and renaming the new file to the original file name. If you use
--quick, myisamchk does not create a temporary `.MYD'
file, but instead assumes that the `.MYD' file is correct and only
generates a new index file without touching the `.MYD' file. This
is safe, because myisamchk automatically detects if the
`.MYD' file is corrupt and aborts the repair in this case. You can
also give two --quick options to myisamchk. In this case,
myisamchk does not abort on some errors (like duplicate key) but
instead tries to resolve them by modifying the `.MYD'
file. Normally the use of two --quick options is useful only if
you have too little free disk space to perform a normal repair. In this
case you should at least make a backup before running myisamchk.
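For example, the first command below repairs only the index file, while the second (with --quick given twice) also allows myisamchk to modify the datafile to resolve problems such as duplicate keys (tbl_name is a placeholder):
shell> myisamchk --recover --quick tbl_name
shell> myisamchk --recover --quick --quick tbl_name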
To check a MyISAM table, use the following commands:
myisamchk tbl_name
myisamchk without options or
with either the -s or --silent option.
myisamchk -m tbl_name
myisamchk -e tbl_name
-e means
``extended check''). It does a check-read of every key for each row to verify
that they indeed point to the correct row. This may take a long time on a
big table with many keys. myisamchk will normally stop after the first
error it finds. If you want to obtain more information, you can add the
--verbose (-v) option. This causes myisamchk to keep
going, up through a maximum of 20 errors. In normal usage, a simple
myisamchk (with no arguments other than the table name) is sufficient.
myisamchk -e -i tbl_name
-i option tells myisamchk to
print some informational statistics, too.
In the following section we only talk about using myisamchk on
MyISAM tables (extensions `.MYI' and `.MYD'). If you
are using ISAM tables (extensions `.ISM' and `.ISD'),
you should use isamchk instead.
Starting with MySQL Version 3.23.14, you can repair MyISAM
tables with the REPAIR TABLE command. See section 4.4.5 REPAIR TABLE Syntax.
The symptoms of a corrupted table include queries that abort unexpectedly and observable errors such as these:
You can get more information about the error by running perror ###. Here
are the most common errors that indicate a problem with the table:
shell> perror 126 127 132 134 135 136 141 144 145
126 = Index file is crashed / Wrong file format
127 = Record-file is crashed
132 = Old database file
134 = Record was already deleted (or record file crashed)
135 = No more room in record file
136 = No more room in index file
141 = Duplicate unique key or constraint on write or update
144 = Table is crashed and last repair failed
145 = Table was marked as crashed and should be repaired
Note that error 135, no more room in record file, is not an error that can be fixed by a simple repair. In this case you have to do:
ALTER TABLE table MAX_ROWS=xxx AVG_ROW_LENGTH=yyy;
In the other cases, you must repair your tables. myisamchk
can usually detect and fix most things that go wrong.
The repair process involves up to four stages, described here. Before you
begin, you should cd to the database directory and check the
permissions of the table files. Make sure they are readable by the Unix user
that mysqld runs as (and by you, because you need to access the files
you are checking). If it turns out you need to modify files, they must also
be writable by you.
If you are using MySQL Version 3.23.16 and above, you can (and
should) use the CHECK and REPAIR commands to check and repair
MyISAM tables. See section 4.4.4 CHECK TABLE Syntax. See section 4.4.5 REPAIR TABLE Syntax.
The manual section about table maintenance includes the options to
isamchk/myisamchk. See section 4.4.6 Using myisamchk for Table Maintenance and Crash Recovery.
The following section is for the cases where the above command fails or
if you want to use the extended features that isamchk/myisamchk provides.
If you are going to repair a table from the command-line, you must first
take down the mysqld server. Note that when you do
mysqladmin shutdown on a remote server, the mysqld server
will still be alive for a while after mysqladmin returns, until
all queries are stopped and all keys have been flushed to disk.
Stage 1: Checking your tables
Run myisamchk *.MYI or myisamchk -e *.MYI if you have
more time. Use the -s (silent) option to suppress unnecessary
information.
If the mysqld server is down, you should use the --update-state option to tell
myisamchk to mark the table as 'checked'.
You have to repair only those tables for which myisamchk announces an
error. For such tables, proceed to Stage 2.
If you get weird errors when checking (such as out of
memory errors), or if myisamchk crashes, go to Stage 3.
Stage 2: Easy safe repair
Note: If you want repairing to go much faster, you should add: -O
sort_buffer=# -O key_buffer=# (where # is about 1/4 of the available
memory) to all isamchk/myisamchk commands.
First, try myisamchk -r -q tbl_name (-r -q means ``quick
recovery mode''). This will attempt to repair the index file without
touching the datafile. If the datafile contains everything that it
should and the delete links point at the correct locations within the
datafile, this should work, and the table is fixed. Start repairing the
next table. Otherwise, use the following procedure:
myisamchk -r tbl_name (-r means ``recovery mode''). This will
remove incorrect records and deleted records from the datafile and
reconstruct the index file.
If the previous step fails, use myisamchk --safe-recover tbl_name.
Safe recovery mode uses an old recovery method that handles a few cases that
regular recovery mode doesn't (but is slower).
If you get weird errors when repairing (such as out of
memory errors), or if myisamchk crashes, go to Stage 3.
Stage 3: Difficult repair
You should only reach this stage if the first 16K block in the index file is destroyed or contains incorrect information, or if the index file is missing. In this case, it's necessary to create a new index file. Do so as follows:
shell> mysql db_name
mysql> SET AUTOCOMMIT=1;
mysql> TRUNCATE TABLE table_name;
mysql> quit
If your SQL version doesn't have TRUNCATE TABLE, use DELETE FROM table_name instead.
Go back to Stage 2. myisamchk -r -q should work now. (This shouldn't
be an endless loop.)
As of MySQL 4.0.2 you can also use REPAIR ... USE_FRM
which performs the whole procedure automatically.
Stage 4: Very difficult repair
You should reach this stage only if the description file has also crashed. That should never happen, because the description file isn't changed after the table is created:
Restore the description file from a backup and go back to Stage 3. You can also restore the index file and go back to Stage 2; in the latter case, you should start with myisamchk -r.
To coalesce fragmented records and eliminate wasted space resulting from
deleting or updating records, run myisamchk in recovery mode:
shell> myisamchk -r tbl_name
You can optimise a table in the same way using the SQL OPTIMIZE TABLE
statement. OPTIMIZE TABLE does a repair of the table and a key
analysis, and also sorts the index tree to give faster key lookups.
There is also no possibility of unwanted interaction between a utility
and the server, because the server does all the work when you use
OPTIMIZE TABLE. See section 4.5.1 OPTIMIZE TABLE Syntax.
myisamchk also has a number of other options you can use to improve
the performance of a table:
-S, --sort-index
-R index_num, --sort-records=index_num
-a, --analyze
For a full description of these options, see section 4.4.6.1 myisamchk Invocation Syntax.
Starting with MySQL Version 3.23.13, you can check MyISAM
tables with the CHECK TABLE command. See section 4.4.4 CHECK TABLE Syntax. You can
repair tables with the REPAIR TABLE command. See section 4.4.5 REPAIR TABLE Syntax.
It is a good idea to perform table checks on a regular basis rather than
waiting for problems to occur. For maintenance purposes, you can use
myisamchk -s to check tables. The -s option (short for
--silent) causes myisamchk to run in silent mode, printing
messages only when errors occur.
It's also a good idea to check tables when the server starts up.
For example, whenever the machine has been rebooted in the middle of an
update, you usually need to check all the tables that could have been
affected. (This is an ``expected crashed table''.) You could add a test to
safe_mysqld that runs myisamchk to check all tables that have
been modified during the last 24 hours if there is an old `.pid'
(process ID) file left after a reboot. (The `.pid' file is created by
mysqld when it starts up and removed when it terminates normally. The
presence of a `.pid' file at system startup time indicates that
mysqld terminated abnormally.)
An even better test would be to check any table whose last-modified time is more recent than that of the `.pid' file.
You should also check your tables regularly during normal system
operation. At MySQL AB, we run a cron job to check all
our important tables once a week, using a line like this in a `crontab'
file:
35 0 * * 0 /path/to/myisamchk --fast --silent /path/to/datadir/*/*.MYI
This prints out information about crashed tables so we can examine and repair them when needed.
As we haven't had any unexpectedly crashed tables (tables that become corrupted for reasons other than hardware trouble) for a couple of years now (this is really true), once a week is more than enough for us.
We recommend that to start with, you execute myisamchk -s each
night on all tables that have been updated during the last 24 hours,
until you come to trust MySQL as much as we do.
Normally you don't need to maintain MySQL tables that much. If
you are changing tables with dynamic size rows (tables with VARCHAR,
BLOB or TEXT columns) or have tables with many deleted rows
you may want to defragment/reclaim space from the tables from time to time
(once a month, say).
You can do this by using OPTIMIZE TABLE on the tables in question, or,
if you can take the mysqld server down for a while, do:
isamchk -r --silent --sort-index -O sort_buffer_size=16M */*.ISM
myisamchk -r --silent --sort-index -O sort_buffer_size=16M */*.MYI
To get a description of a table or statistics about it, use the commands shown here. We explain some of the information in more detail later:
myisamchk in ``describe mode'' to produce a description of
your table. If you start the MySQL server using the
--skip-external-locking option, myisamchk may report an error
for a table that is updated while it runs. However, because myisamchk
doesn't change the table in describe mode, there isn't any risk of
destroying data.
myisamchk is doing, add -v
to tell it to run in verbose mode.
-eis, but tells you what is being done.
Example of myisamchk -d output:
MyISAM file: company.MYI
Record format: Fixed length
Data records: 1403698 Deleted blocks: 0
Recordlength: 226
table description:
Key Start Len Index Type
1 2 8 unique double
2 15 10 multip. text packed stripped
3 219 8 multip. double
4 63 10 multip. text packed stripped
5 167 2 multip. unsigned short
6 177 4 multip. unsigned long
7 155 4 multip. text
8 138 4 multip. unsigned long
9 177 4 multip. unsigned long
193 1 text
Example of myisamchk -d -v output:
MyISAM file: company
Record format: Fixed length
File-version: 1
Creation time: 1999-10-30 12:12:51
Recover time: 1999-10-31 19:13:01
Status: checked
Data records: 1403698 Deleted blocks: 0
Datafile parts: 1403698 Deleted data: 0
Datafilepointer (bytes): 3 Keyfile pointer (bytes): 3
Max datafile length: 3791650815 Max keyfile length: 4294967294
Recordlength: 226
table description:
Key Start Len Index Type Rec/key Root Blocksize
1 2 8 unique double 1 15845376 1024
2 15 10 multip. text packed stripped 2 25062400 1024
3 219 8 multip. double 73 40907776 1024
4 63 10 multip. text packed stripped 5 48097280 1024
5 167 2 multip. unsigned short 4840 55200768 1024
6 177 4 multip. unsigned long 1346 65145856 1024
7 155 4 multip. text 4995 75090944 1024
8 138 4 multip. unsigned long 87 85036032 1024
9 177 4 multip. unsigned long 178 96481280 1024
193 1 text
Example of myisamchk -eis output:
Checking MyISAM file: company
Key:  1:  Keyblocks used:  97%  Packed:    0%  Max levels:  4
Key:  2:  Keyblocks used:  98%  Packed:   50%  Max levels:  4
Key:  3:  Keyblocks used:  97%  Packed:    0%  Max levels:  4
Key:  4:  Keyblocks used:  99%  Packed:   60%  Max levels:  3
Key:  5:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
Key:  6:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
Key:  7:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
Key:  8:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
Key:  9:  Keyblocks used:  98%  Packed:    0%  Max levels:  4
Total:    Keyblocks used:  98%  Packed:   17%
Records:          1403698   M.recordlength:   226   Packed:          0%
Recordspace used:    100%   Empty space:        0%  Blocks/Record: 1.00
Record blocks:    1403698   Delete blocks:       0
Recorddata:     317235748   Deleted data:        0
Lost space:             0   Linkdata:            0
User time 1626.51, System time 232.36
Maximum resident set size 0, Integral resident set size 0
Non physical pagefaults 0, Physical pagefaults 627, Swaps 0
Blocks in 0 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 639, Involuntary context switches 28966
Example of myisamchk -eiv output:
Checking MyISAM file: company
Data records: 1403698   Deleted blocks:       0
- check file-size
- check delete-chain
block_size 1024:
index  1:
index  2:
index  3:
index  4:
index  5:
index  6:
index  7:
index  8:
index  9:
No recordlinks
- check index reference
- check data record references index: 1
Key:  1:  Keyblocks used:  97%  Packed:    0%  Max levels:  4
- check data record references index: 2
Key:  2:  Keyblocks used:  98%  Packed:   50%  Max levels:  4
- check data record references index: 3
Key:  3:  Keyblocks used:  97%  Packed:    0%  Max levels:  4
- check data record references index: 4
Key:  4:  Keyblocks used:  99%  Packed:   60%  Max levels:  3
- check data record references index: 5
Key:  5:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
- check data record references index: 6
Key:  6:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
- check data record references index: 7
Key:  7:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
- check data record references index: 8
Key:  8:  Keyblocks used:  99%  Packed:    0%  Max levels:  3
- check data record references index: 9
Key:  9:  Keyblocks used:  98%  Packed:    0%  Max levels:  4
Total:    Keyblocks used:   9%  Packed:   17%
- check records and index references
[LOTS OF ROW NUMBERS DELETED]
Records:          1403698   M.recordlength:   226   Packed:          0%
Recordspace used:    100%   Empty space:        0%  Blocks/Record: 1.00
Record blocks:    1403698   Delete blocks:       0
Recorddata:     317235748   Deleted data:        0
Lost space:             0   Linkdata:            0
User time 1639.63, System time 251.61
Maximum resident set size 0, Integral resident set size 0
Non physical pagefaults 0, Physical pagefaults 10580, Swaps 0
Blocks in 4 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 10604, Involuntary context switches 122798
Here are the sizes of the data and index files for the table used in the preceding examples:
-rw-rw-r-- 1 monty  tcx 317235748 Jan 12 17:30 company.MYD
-rw-rw-r-- 1 davida tcx  96482304 Jan 12 18:35 company.MYM
Explanations for the types of information myisamchk produces are
given here. The ``keyfile'' is the index file. ``Record'' and ``row''
are synonymous:
Data
records.
Fixed length.
Other possible values are Compressed and Packed.
unique or multip. (multiple). Indicates whether one value
can exist multiple times in this index.
packed, stripped or empty.
myisamchk -a. If this is not updated at all, a default
value of 30 is given.
myisamchk, the values are very
high (very near the theoretical maximum).
CHAR/VARCHAR/DECIMAL keys. For long strings like
names, this can significantly reduce the space used. In the third example
above, the 4th key is 10 characters long and a 60% reduction in space is
achieved.
Packed
value indicates the percentage of savings achieved by doing this.
myisamchk.
See section 4.4.6.10 Table Optimisation.
Linkdata is the sum of the amount of
storage used by all such pointers.
If a table has been compressed with myisampack, myisamchk
-d prints additional information about each table column. See
section 4.7.4 myisampack, The MySQL Compressed Read-only Table Generator, for an example of this
information and a description of what it means.
OPTIMIZE TABLE Syntax
OPTIMIZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name[,tbl_name]...
OPTIMIZE TABLE should be used if you have deleted a large part of a
table or if you have made many changes to a table with variable-length rows
(tables that have VARCHAR, BLOB, or TEXT columns).
Deleted records are maintained in a linked list and subsequent INSERT
operations reuse old record positions. You can use OPTIMIZE TABLE to
reclaim the unused space and to defragment the datafile.
For the moment, OPTIMIZE TABLE works only on MyISAM and
BDB tables. For BDB tables, OPTIMIZE TABLE is
currently mapped to ANALYZE TABLE.
See section 4.5.2 ANALYZE TABLE Syntax.
You can get OPTIMIZE TABLE to work on other table types by starting
mysqld with --skip-new or --safe-mode, but in this
case OPTIMIZE TABLE is just mapped to ALTER TABLE.
OPTIMIZE TABLE works the following way:
If the table has deleted or split rows, repair the table.
If the index pages are not sorted, sort them.
If the statistics are not up to date (and the repair couldn't be done by sorting the index), update them.
Note that the table is locked during the time OPTIMIZE TABLE is
running!
Strictly before MySQL 4.1.1, OPTIMIZE commands are not written
to the binary log. Since MySQL 4.1.1 they are written to the binary
log unless the optional NO_WRITE_TO_BINLOG keyword
(or its alias LOCAL) was used.
ANALYZE TABLE Syntax
ANALYZE [LOCAL | NO_WRITE_TO_BINLOG] TABLE tbl_name[,tbl_name...]
Analyse and store the key distribution for the table. During the
analysis, the table is locked with a read lock. This works on
MyISAM and BDB tables.
This is equivalent to running myisamchk -a on the table.
MySQL uses the stored key distribution to decide in which order tables should be joined when one does a join on something other than a constant.
The command returns a table with the following columns:
| Column | Value |
| Table | Table name |
| Op | Always ``analyze'' |
| Msg_type | One of status, error, info or warning. |
| Msg_text | The message. |
You can check the stored key distribution with the SHOW INDEX command.
See section 4.5.7.1 Retrieving information about Database, Tables, Columns, and Indexes.
If the table hasn't changed since the last ANALYZE TABLE command,
the table will not be analysed again.
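For example (tbl_name is a placeholder); the second statement lets you inspect the updated Cardinality values:
mysql> ANALYZE TABLE tbl_name;
mysql> SHOW INDEX FROM tbl_name;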
Strictly before MySQL 4.1.1, ANALYZE commands are not written
to the binary log. Since MySQL 4.1.1 they are written to the binary
log unless the optional NO_WRITE_TO_BINLOG keyword
(or its alias LOCAL) was used.
FLUSH Syntax
FLUSH [LOCAL | NO_WRITE_TO_BINLOG] flush_option [,flush_option] ...
You should use the FLUSH command if you want to clear some of the
internal caches MySQL uses. To execute FLUSH, you must have
the RELOAD privilege.
flush_option can be any of the following:
| Option | Description |
HOSTS | Empties the host cache tables. You should flush the
host tables if some of your hosts change IP number or if you get the
error message Host ... is blocked. When more than
max_connect_errors errors occur in a row for a given host while
connecting to the MySQL server, MySQL assumes
something is wrong and blocks the host from further connection requests.
Flushing the host tables allows the host to attempt to connect
again. See section A.2.4 Host '...' is blocked Error. You can start mysqld with
-O max_connect_errors=999999999 to avoid this error message.
|
DES_KEY_FILE | Reloads the DES keys from the file that was
specified with the --des-key-file option at server startup time.
|
LOGS | Closes and reopens all log files.
If you have specified an update log file or a binary log file without
an extension, the extension number of the log file will be incremented
by one relative to the previous file. If you have used an extension in
the file name, MySQL will close and reopen the update log file.
See section 4.9.3 The Update Log. This is the same thing as sending the SIGHUP
signal to the mysqld server.
|
PRIVILEGES | Reloads the privileges from the grant tables in
the mysql database.
|
QUERY CACHE | Defragment the query cache to better utilise its
memory. This command will not remove any queries from the cache, unlike
RESET QUERY CACHE.
|
TABLES | Closes all open tables and forces all tables in use to be closed. This also flushes the query cache. |
[TABLE | TABLES] tbl_name [,tbl_name...] | Flushes only the given tables. |
TABLES WITH READ LOCK | Closes all open tables and locks all tables for all databases with a read lock until you execute UNLOCK TABLES. This is a very convenient way to get backups if you have a filesystem, like Veritas, that can take snapshots in time.
|
STATUS | Resets most status variables to zero. This is something one should only use when debugging a query. |
USER_RESOURCES | Resets all user resources to zero. This will enable blocked users to log in again. See section 4.3.6 Limiting user resources. |
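For example, the following statements empty the host cache and close and reopen the log files:
mysql> FLUSH HOSTS;
mysql> FLUSH LOGS;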
Strictly before MySQL 4.1.1, FLUSH commands are not written
to the binary log. Since MySQL 4.1.1 they are written to the binary
log unless the optional NO_WRITE_TO_BINLOG keyword
(or its alias LOCAL) was used, or
unless the command contained one of these arguments: LOGS,
MASTER, SLAVE, TABLES WITH READ LOCK, because any
of these arguments may cause problems if replicated to a slave.
You can also access some of the commands shown above with the mysqladmin
utility, using the flush-hosts, flush-logs, flush-privileges,
flush-status or flush-tables commands.
Take also a look at the RESET command used with replication.
See section 4.5.4 RESET Syntax.
RESET Syntax
RESET reset_option [,reset_option] ...
The RESET command is used to clear things. It also acts as a stronger
version of the FLUSH command. See section 4.5.3 FLUSH Syntax.
To execute RESET, you must have the RELOAD privilege.
| Option | Description |
MASTER | Deletes all binary logs listed in the index file, resetting the binlog
index file to be empty. Previously named FLUSH MASTER. See section 4.10.7 SQL Commands Related to Replication.
|
SLAVE | Makes the slave forget its replication position in the master
logs. Previously named FLUSH SLAVE. See section 4.10.7 SQL Commands Related to Replication.
|
QUERY CACHE | Removes all query results from the query cache. |
PURGE [MASTER] LOGS Syntax
PURGE [MASTER] LOGS TO binlog_name
PURGE [MASTER] LOGS BEFORE date
The MASTER keyword is optional; the command behaves the same whether it is specified or not.
This command is used to delete all binary logs strictly prior to the
specified binlog or date, see section 4.10.7 SQL Commands Related to Replication.
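For example (the log name and date are placeholders):
mysql> PURGE MASTER LOGS TO 'hostname-bin.010';
mysql> PURGE MASTER LOGS BEFORE '2003-04-02 22:46:26';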
KILL Syntax
KILL thread_id
Each connection to mysqld runs in a separate thread. You can see
which threads are running with the SHOW PROCESSLIST command and kill
a thread with the KILL thread_id command.
If you have the PROCESS privilege, you can see all threads.
If you have the SUPER privilege, you can kill all threads.
Otherwise, you can only see and kill your own threads.
You can also use the mysqladmin processlist and mysqladmin kill
commands to examine and kill threads.
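For example, to find and kill a runaway thread (the thread id 27 is just an example):
mysql> SHOW PROCESSLIST;
mysql> KILL 27;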
Note: You currently cannot use KILL with the Embedded MySQL
Server library, because the embedded server merely runs inside the threads
of the host application; it does not create connection threads of its own.
When you do a KILL, a thread-specific kill flag is set for
the thread.
In most cases it may take some time for the thread to die as the kill flag is only checked at specific intervals.
In SELECT, ORDER BY and GROUP BY loops, the flag is
checked after reading a block of rows. If the kill flag is set, the
statement is aborted.
During ALTER TABLE, the kill flag is checked before each block of
rows is read from the original table. If the kill flag was set, the command
is aborted and the temporary table is deleted.
During UPDATE or DELETE, the kill flag
is checked after each block read and after each updated or deleted
row. If the kill flag is set, the statement is aborted. Note that if you
are not using transactions, the changes will not be rolled back!
GET_LOCK() will abort with NULL.
An INSERT DELAYED thread will quickly flush all rows it has in
memory and die.
If the thread is in the table lock handler (state: Locked),
the table lock will be quickly aborted.
If the thread is waiting for free disk space in a write call, the
write is aborted with a ``disk full'' error message.
SHOW Syntax
SHOW DATABASES [LIKE wild]
or SHOW [OPEN] TABLES [FROM db_name] [LIKE wild]
or SHOW [FULL] COLUMNS FROM tbl_name [FROM db_name] [LIKE wild]
or SHOW INDEX FROM tbl_name [FROM db_name]
or SHOW TABLE STATUS [FROM db_name] [LIKE wild]
or SHOW STATUS [LIKE wild]
or SHOW VARIABLES [LIKE wild]
or SHOW LOGS
or SHOW [FULL] PROCESSLIST
or SHOW GRANTS FOR user
or SHOW CREATE TABLE table_name
or SHOW MASTER STATUS
or SHOW MASTER LOGS
or SHOW SLAVE STATUS
or SHOW WARNINGS [LIMIT #]
or SHOW ERRORS [LIMIT #]
or SHOW TABLE TYPES
SHOW provides information about databases, tables, columns, or
status information about the server. If the LIKE wild part is
used, the wild string can be a string that uses the SQL `%'
and `_' wildcard characters.
You can use db_name.tbl_name as an alternative to the tbl_name
FROM db_name syntax. These two statements are equivalent:
mysql> SHOW INDEX FROM mytable FROM mydb; mysql> SHOW INDEX FROM mydb.mytable;
SHOW DATABASES lists the databases on the MySQL server host.
You can also get this list using the mysqlshow command line tool.
As of version 4.0.2, you will only see those databases for which you have some
kind of privilege, unless you have the global SHOW DATABASES
privilege.
SHOW TABLES lists the tables in a given database. You can also
get this list using the mysqlshow db_name command.
Note: if a user doesn't have any privileges for a table, the table
will not show up in the output from SHOW TABLES or mysqlshow
db_name.
SHOW OPEN TABLES lists the tables that are currently open in
the table cache. See section 5.4.7 How MySQL Opens and Closes Tables. The Comment field tells
how many times the table is cached and in_use.
SHOW COLUMNS lists the columns in a given table. If you specify
the FULL option, you will also get the privileges you have for
each column. If the column types are different from what you expect them to
be based on a CREATE TABLE statement, note that MySQL
sometimes changes column types. See section 6.5.3.1 Silent Column Specification Changes.
The DESCRIBE statement provides information similar to
SHOW COLUMNS.
See section 6.6.2 DESCRIBE Syntax (Get Information About Columns).
SHOW FIELDS is a synonym for SHOW COLUMNS, and
SHOW KEYS is a synonym for SHOW INDEX. You can also
list a table's columns or indexes with mysqlshow db_name tbl_name
or mysqlshow -k db_name tbl_name.
SHOW INDEX returns the index information in a format that closely
resembles the SQLStatistics call in ODBC. The following columns
are returned:
| Column | Meaning |
Table | Name of the table. |
Non_unique | 0 if the index can't contain duplicates. |
Key_name | Name of the index. |
Seq_in_index | Column sequence number in index, starting with 1. |
Column_name | Column name. |
Collation | How the column is sorted in the index. In MySQL, this can have values `A' (Ascending) or NULL (Not sorted). |
Cardinality | Number of unique values in the index. This is updated by running isamchk -a. |
Sub_part | Number of indexed characters if the column is only partly indexed. NULL if the entire key is indexed. |
Null | Contains 'YES' if the column may contain NULL. |
Index_type | Index method used. |
Comment | Various remarks. For now, it tells in MySQL < 4.0.2 whether the index is FULLTEXT or not. |
Note that as the Cardinality is counted based on statistics
stored as integers, it's not necessarily accurate for small tables.
The Null and Index_type columns were added in MySQL 4.0.2.
SHOW TABLE STATUS
SHOW TABLE STATUS [FROM db_name] [LIKE wild]
SHOW TABLE STATUS (new in Version 3.23) works like SHOW
STATUS, but provides a lot of information about each table. You can
also get this list using the mysqlshow --status db_name command.
The following columns are returned:
| Column | Meaning |
Name | Name of the table. |
Type | Type of table. See section 7 MySQL Table Types. |
Row_format | The row storage format (Fixed, Dynamic, or Compressed). |
Rows | Number of rows. |
Avg_row_length | Average row length. |
Data_length | Length of the datafile. |
Max_data_length | Max length of the datafile. For fixed row formats, this is the max number of rows in the table. For dynamic row formats, this is the total number of data bytes that can be stored in the table, given the data pointer size used. |
Index_length | Length of the index file. |
Data_free | Number of allocated but not used bytes. |
Auto_increment | Next autoincrement value. |
Create_time | When the table was created. |
Update_time | When the datafile was last updated. |
Check_time | When the table was last checked. |
| Create_options | Extra options used with CREATE TABLE. |
| Comment | The comment used when creating the table (or some information about why MySQL couldn't access the table information). |
InnoDB tables will report the free space in the tablespace
in the table comment.
SHOW STATUS
SHOW STATUS provides server status information
(like mysqladmin extended-status). The output resembles that shown
here, though the format and numbers probably differ:
+--------------------------+------------+
| Variable_name            | Value      |
+--------------------------+------------+
| Aborted_clients          | 0          |
| Aborted_connects         | 0          |
| Bytes_received           | 155372598  |
| Bytes_sent               | 1176560426 |
| Connections              | 30023      |
| Created_tmp_disk_tables  | 0          |
| Created_tmp_tables       | 8340       |
| Created_tmp_files        | 60         |
| Delayed_insert_threads   | 0          |
| Delayed_writes           | 0          |
| Delayed_errors           | 0          |
| Flush_commands           | 1          |
| Handler_delete           | 462604     |
| Handler_read_first       | 105881     |
| Handler_read_key         | 27820558   |
| Handler_read_next        | 390681754  |
| Handler_read_prev        | 6022500    |
| Handler_read_rnd         | 30546748   |
| Handler_read_rnd_next    | 246216530  |
| Handler_update           | 16945404   |
| Handler_write            | 60356676   |
| Key_blocks_used          | 14955      |
| Key_read_requests        | 96854827   |
| Key_reads                | 162040     |
| Key_write_requests       | 7589728    |
| Key_writes               | 3813196    |
| Max_used_connections     | 0          |
| Not_flushed_key_blocks   | 0          |
| Not_flushed_delayed_rows | 0          |
| Open_tables              | 1          |
| Open_files               | 2          |
| Open_streams             | 0          |
| Opened_tables            | 44600      |
| Questions                | 2026873    |
| Select_full_join         | 0          |
| Select_full_range_join   | 0          |
| Select_range             | 99646      |
| Select_range_check       | 0          |
| Select_scan              | 30802      |
| Slave_running            | OFF        |
| Slave_open_temp_tables   | 0          |
| Slow_launch_threads      | 0          |
| Slow_queries             | 0          |
| Sort_merge_passes        | 30         |
| Sort_range               | 500        |
| Sort_rows                | 30296250   |
| Sort_scan                | 4650       |
| Table_locks_immediate    | 1920382    |
| Table_locks_waited       | 0          |
| Threads_cached           | 0          |
| Threads_created          | 30022      |
| Threads_connected        | 1          |
| Threads_running          | 1          |
| Uptime                   | 80380      |
+--------------------------+------------+
The status variables listed above have the following meaning:
| Variable | Meaning |
Aborted_clients | Number of connections aborted because the client died without closing the connection properly. See section A.2.9 Communication Errors / Aborted Connection. |
Aborted_connects | Number of tries to connect to the MySQL server that failed. See section A.2.9 Communication Errors / Aborted Connection. |
Bytes_received | Number of bytes received from all clients. |
Bytes_sent | Number of bytes sent to all clients. |
Com_xxx | Number of times each xxx command has been executed. |
Connections | Number of connection attempts to the MySQL server. |
Created_tmp_disk_tables | Number of implicit temporary tables on disk created while executing statements. |
Created_tmp_tables | Number of implicit temporary tables in memory created while executing statements. |
Created_tmp_files | How many temporary files mysqld has created.
|
Delayed_insert_threads | Number of delayed insert handler threads in use. |
Delayed_writes | Number of rows written with INSERT DELAYED.
|
Delayed_errors | Number of rows written with INSERT DELAYED for which some error occurred (probably duplicate key).
|
Flush_commands | Number of executed FLUSH commands.
|
Handler_commit | Number of internal COMMIT commands.
|
Handler_delete | Number of times a row was deleted from a table. |
Handler_read_first | Number of times the first entry was read from an index.
If this is high, it suggests that the server is doing a lot of full index scans, for example,
SELECT col1 FROM foo, assuming that col1 is indexed.
|
Handler_read_key | Number of requests to read a row based on a key. If this is high, it is a good indication that your queries and tables are properly indexed. |
Handler_read_next | Number of requests to read next row in key order. This will be incremented if you are querying an index column with a range constraint. This also will be incremented if you are doing an index scan. |
Handler_read_prev | Number of requests to read previous row in key order. This is mainly used to optimise ORDER BY ... DESC.
|
Handler_read_rnd | Number of requests to read a row based on a fixed position. This will be high if you are doing a lot of queries that require sorting of the result. |
Handler_read_rnd_next | Number of requests to read the next row in the datafile. This will be high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have. |
Handler_rollback | Number of internal ROLLBACK commands. |
Handler_update | Number of requests to update a row in a table. |
Handler_write | Number of requests to insert a row in a table. |
Key_blocks_used | The number of used blocks in the key cache. |
Key_read_requests | The number of requests to read a key block from the cache. |
Key_reads | The number of physical reads of a key block from disk. |
Key_write_requests | The number of requests to write a key block to the cache. |
Key_writes | The number of physical writes of a key block to disk. |
Max_used_connections | The maximum number of connections in use simultaneously. |
Not_flushed_key_blocks | Key blocks in the key cache that have changed but haven't yet been flushed to disk. |
Not_flushed_delayed_rows | Number of rows waiting to be written in INSERT DELAYED queues. |
Open_tables | Number of tables that are open. |
Open_files | Number of files that are open. |
Open_streams | Number of streams that are open (used mainly for logging). |
Opened_tables | Number of tables that have been opened. |
Rpl_status | Status of failsafe replication. (Not yet in use). |
Select_full_join | Number of joins without keys (If this is not 0, you should carefully check the indexes of your tables). |
Select_full_range_join | Number of joins where we used a range search on a reference table. |
Select_range | Number of joins where we used ranges on the first table. (It's normally not critical even if this is big.) |
Select_scan | Number of joins where we did a full scan of the first table. |
Select_range_check | Number of joins without keys where we check for key usage after each row (If this is not 0, you should carefully check the indexes of your tables). |
Questions | Number of queries sent to the server. |
Slave_open_temp_tables | Number of temporary tables currently open by the slave thread |
Slave_running | Is ON if this is a slave that is connected to a master. |
Slow_launch_threads | Number of threads that have taken more than slow_launch_time to create. |
Slow_queries | Number of queries that have taken more than long_query_time. See section 4.9.5 The Slow Query Log. |
Sort_merge_passes | Number of merge passes the sort algorithm has had to do. If this value is large, you should consider increasing sort_buffer_size. |
Sort_range | Number of sorts that were done with ranges. |
Sort_rows | Number of sorted rows. |
Sort_scan | Number of sorts that were done by scanning the table. |
ssl_xxx | Variables used by SSL; Not yet implemented. |
Table_locks_immediate | Number of times a table lock was acquired immediately. Available after 3.23.33. |
Table_locks_waited | Number of times a table lock could not be acquired immediately and a wait was needed. If this is high, and you have performance problems, you should first optimise your queries, and then either split your table(s) or use replication. Available after 3.23.33. |
Threads_cached | Number of threads in the thread cache. |
Threads_connected | Number of currently open connections. |
Threads_created | Number of threads created to handle connections. |
Threads_running | Number of threads that are not sleeping. |
Uptime | How many seconds the server has been up. |
Some comments about the above:
If Opened_tables is big, then your table_cache
variable is probably too small.
If Key_reads is big, then your key_buffer_size variable is
probably too small. The cache miss rate can be calculated with
Key_reads/Key_read_requests.
If Handler_read_rnd is big, then you probably have a lot of
queries that require MySQL to scan whole tables or you have
joins that don't use keys properly.
If Threads_created is big, you may want to increase the
thread_cache_size variable. The cache hit rate can be calculated
with Threads_created/Connections.
If Created_tmp_disk_tables is big, you may want to increase the
tmp_table_size variable to get the temporary tables memory-based
instead of disk-based.
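As a worked illustration using the sample SHOW STATUS output above (your own numbers will of course differ):
Key_reads / Key_read_requests  = 162040 / 96854827 = approx. 0.0017
Threads_created / Connections  = 30022 / 30023     = approx. 1.0
Here the key cache is working well (a miss rate well below 0.01), while nearly every connection has had to create a new thread, which suggests that increasing thread_cache_size would help on this particular server.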
SHOW VARIABLES
SHOW [GLOBAL | SESSION] VARIABLES [LIKE wild]
SHOW VARIABLES shows the values of some MySQL system
variables. You can also get this information using the mysqladmin
variables command. If the default values are unsuitable, you can set most
of these variables using command-line options when mysqld starts up.
See section 4.1.1 mysqld Command-line Options.
The options GLOBAL and SESSION are new in MySQL 4.0.3.
With GLOBAL you will get the variables that will be used for new
connections to MySQL. With SESSION you will get the values that
are in effect for the current connection. If you are not using either
option, SESSION is used.
You can change most options with the SET command.
See section 5.5.6 SET Syntax.
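For example, on a server that supports GLOBAL and SESSION (4.0.3 or later), and assuming the variable in question is dynamically settable, you could inspect a single variable and change it either for new connections or only for the current one (the values shown are purely illustrative):
mysql> SHOW VARIABLES LIKE 'sort_buffer_size';
mysql> SET GLOBAL sort_buffer_size=1048576;
mysql> SET SESSION sort_buffer_size=524288;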
The output resembles that shown here, though the format and numbers may differ somewhat:
+---------------------------------+------------------------------+
| Variable_name                   | Value                        |
+---------------------------------+------------------------------+
| back_log                        | 50                           |
| basedir                         | /usr/local/mysql             |
| bdb_cache_size                  | 8388572                      |
| bdb_log_buffer_size             | 32768                        |
| bdb_home                        | /usr/local/mysql             |
| bdb_max_lock                    | 10000                        |
| bdb_logdir                      |                              |
| bdb_shared_data                 | OFF                          |
| bdb_tmpdir                      | /tmp/                        |
| bdb_version                     | Sleepycat Software: ...      |
| binlog_cache_size               | 32768                        |
| bulk_insert_buffer_size         | 8388608                      |
| character_set                   | latin1                       |
| character_sets                  | latin1 big5 czech euc_kr     |
| concurrent_insert               | ON                           |
| connect_timeout                 | 5                            |
| convert_character_set           |                              |
| datadir                         | /usr/local/mysql/data/       |
| delay_key_write                 | ON                           |
| delayed_insert_limit            | 100                          |
| delayed_insert_timeout          | 300                          |
| delayed_queue_size              | 1000                         |
| flush                           | OFF                          |
| flush_time                      | 0                            |
| ft_boolean_syntax               | + -><()~*:""&|               |
| ft_min_word_len                 | 4                            |
| ft_max_word_len                 | 254                          |
| ft_max_word_len_for_sort        | 20                           |
| ft_stopword_file                | (built-in)                   |
| have_bdb                        | YES                          |
| have_innodb                     | YES                          |
| have_isam                       | YES                          |
| have_raid                       | NO                           |
| have_symlink                    | DISABLED                     |
| have_openssl                    | YES                          |
| have_query_cache                | YES                          |
| init_file                       |                              |
| innodb_additional_mem_pool_size | 1048576                      |
| innodb_buffer_pool_size         | 8388608                      |
| innodb_data_file_path           | ibdata1:10M:autoextend       |
| innodb_data_home_dir            |                              |
| innodb_file_io_threads          | 4                            |
| innodb_force_recovery           | 0                            |
| innodb_thread_concurrency       | 8                            |
| innodb_flush_log_at_trx_commit  | 1                            |
| innodb_fast_shutdown            | ON                           |
| innodb_flush_method             |                              |
| innodb_lock_wait_timeout        | 50                           |
| innodb_log_arch_dir             |                              |
| innodb_log_archive              | OFF                          |
| innodb_log_buffer_size          | 1048576                      |
| innodb_log_file_size            | 5242880                      |
| innodb_log_files_in_group       | 2                            |
| innodb_log_group_home_dir       | ./                           |
| innodb_mirrored_log_groups      | 1                            |
| interactive_timeout             | 28800                        |
| join_buffer_size                | 131072                       |
| key_buffer_size                 | 16773120                     |
| language                        | /usr/local/mysql/share/...   |
| large_files_support             | ON                           |
| local_infile                    | ON                           |
| locked_in_memory                | OFF                          |
| log                             | OFF                          |
| log_update                      | OFF                          |
| log_bin                         | OFF                          |
| log_slave_updates               | OFF                          |
| log_slow_queries                | OFF                          |
| log_warnings                    | OFF                          |
| long_query_time                 | 10                           |
| low_priority_updates            | OFF                          |
| lower_case_table_names          | OFF                          |
| max_allowed_packet              | 1047552                      |
| max_binlog_cache_size           | 4294967295                   |
| max_binlog_size                 | 1073741824                   |
| max_connections                 | 100                          |
| max_connect_errors              | 10                           |
| max_delayed_threads             | 20                           |
| max_heap_table_size             | 16777216                     |
| max_join_size                   | 4294967295                   |
| max_sort_length                 | 1024                         |
| max_user_connections            | 0                            |
| max_tmp_tables                  | 32                           |
| max_write_lock_count            | 4294967295                   |
| myisam_max_extra_sort_file_size | 268435456                    |
| myisam_repair_threads           | 1                            |
| myisam_max_sort_file_size       | 2147483647                   |
| myisam_recover_options          | force                        |
| myisam_sort_buffer_size         | 8388608                      |
| net_buffer_length               | 16384                        |
| net_read_timeout                | 30                           |
| net_retry_count                 | 10                           |
| net_write_timeout               | 60                           |
| open_files_limit                | 0                            |
| pid_file                        | /usr/local/mysql/name.pid    |
| port                            | 3306                         |
| protocol_version                | 10                           |
| read_buffer_size                | 131072                       |
| read_rnd_buffer_size            | 262144                       |
| rpl_recovery_rank               | 0                            |
| query_cache_limit               | 1048576                      |
| query_cache_size                | 0                            |
| query_cache_type                | ON                           |
| safe_show_database              | OFF                          |
| server_id                       | 0                            |
| slave_net_timeout               | 3600                         |
| skip_external_locking           | ON                           |
| skip_networking                 | OFF                          |
| skip_show_database              | OFF                          |
| slow_launch_time                | 2                            |
| socket                          | /tmp/mysql.sock              |
| sort_buffer_size                | 2097116                      |
| sql_mode                        | 0                            |
| table_cache                     | 64                           |
| table_type                      | MYISAM                       |
| thread_cache_size               | 3                            |
| thread_stack                    | 131072                       |
| tx_isolation                    | READ-COMMITTED               |
| timezone                        | EEST                         |
| tmp_table_size                  | 33554432                     |
| tmpdir                          | /tmp/:/mnt/hd2/tmp/          |
| version                         | 4.0.4-beta                   |
| wait_timeout                    | 28800                        |
+---------------------------------+------------------------------+
Each option is described here. Values for buffer sizes, lengths, and stack
sizes are given in bytes. You can specify values with a suffix of `K'
or `M' to indicate kilobytes or megabytes. For example, 16M
indicates 16 megabytes. The case of suffix letters does not matter;
16M and 16m are equivalent:
ansi_mode
Is ON if mysqld was started with --ansi.
See section 1.8.2 Running MySQL in ANSI Mode.
back_log
The number of outstanding connection requests MySQL can have. This
comes into play when the main MySQL thread gets very
many connection requests in a very short time. It then takes some time
(although very little) for the main thread to check the connection and start
a new thread. The back_log value indicates how many requests can be
stacked during this short time before MySQL momentarily stops
answering new requests. You need to increase this only if you expect a large
number of connections in a short period of time.
In other words, this value is the size of the listen queue for incoming
TCP/IP connections. Your operating system has its own limit on the size
of this queue. The manual page for the Unix listen(2) system
call should have more details. Check your OS documentation for the
maximum value for this variable. Attempting to set back_log
higher than your operating system limit will be ineffective.
basedir
The value of the --basedir option.
bdb_cache_size
The buffer that is allocated to cache index and rows for BDB
tables. If you don't use BDB tables, you should start
mysqld with --skip-bdb to not waste memory for this
cache.
bdb_log_buffer_size
The buffer that is allocated to cache index and rows for BDB
tables. If you don't use BDB tables, you should set this to 0 or
start mysqld with --skip-bdb to not waste memory for this
cache.
bdb_home
The value of the --bdb-home option.
bdb_max_lock
The maximum number of locks (10,000 by default) you can have active on a
BDB table. You should increase this if you get errors of type bdb:
Lock table is out of available locks or Got error 12 from ...
when you have do long transactions or when mysqld has to examine
a lot of rows to calculate the query.
bdb_logdir
The value of the --bdb-logdir option.
bdb_shared_data
Is ON if you are using --bdb-shared-data.
bdb_tmpdir
The value of the --bdb-tmpdir option.
binlog_cache_size
The size of the cache to hold the SQL
statements for the binary log during a transaction. If you often use
big, multi-statement transactions you can increase this to get more
performance. See section 6.7.1 BEGIN/COMMIT/ROLLBACK Syntax.
bulk_insert_buffer_size (was myisam_bulk_insert_tree_size)
MyISAM uses a special tree-like cache to make bulk inserts (that is,
INSERT ... SELECT, INSERT ... VALUES (...), (...), ..., and
LOAD DATA INFILE) faster. This variable limits
the size of the cache tree in bytes per thread. Setting it to 0
will disable this optimisation.
Note: this cache is only used when adding data to a non-empty table.
Default value is 8 MB.
character_set
The default character set.
character_sets
The supported character sets.
concurrent_insert
If ON (the default), MySQL will allow you to use INSERT on
MyISAM tables at the same time as you run SELECT queries
on them. You can turn this option off by starting mysqld with
--safe or --skip-new.
connect_timeout
The number of seconds the mysqld server is waiting for a connect
packet before responding with Bad handshake.
datadir
The value of the --datadir option.
delay_key_write
Option for MyISAM tables. Can have one of the following values:
| OFF | All CREATE TABLE ... DELAY_KEY_WRITE options are ignored. |
| ON | (default) MySQL will honor the DELAY_KEY_WRITE option for CREATE TABLE. |
| ALL | All new opened tables are treated as if they were created with the DELAY_KEY_WRITE option. |
If DELAY_KEY_WRITE is enabled, this means that the key buffer for
tables with this option will not get flushed on every index update, but
only when a table is closed. This will speed up writes on keys a lot,
but you should add automatic checking of all tables with myisamchk
--fast --force if you use this.
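For example, a MyISAM table could be created with delayed key writes and the suggested automatic check could then be run against the data directory like this (table and path names are only placeholders for your own setup):
mysql> CREATE TABLE hits (id INT NOT NULL, cnt INT, PRIMARY KEY (id)) TYPE=MyISAM DELAY_KEY_WRITE=1;
shell> myisamchk --fast --force /usr/local/mysql/data/*/*.MYI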
delayed_insert_limit
After inserting delayed_insert_limit rows, the INSERT
DELAYED handler will check if there are any SELECT statements
pending. If so, it allows these to execute before continuing.
delayed_insert_timeout
How long an INSERT DELAYED thread should wait for INSERT
statements before terminating.
delayed_queue_size
What size queue (in rows) should be allocated for handling INSERT
DELAYED. If the queue becomes full, any client that does INSERT
DELAYED will wait until there is room in the queue again.
flush
This is ON if you have started MySQL with the --flush
option.
flush_time
If this is set to a non-zero value, then every flush_time seconds all
tables will be closed (to free up resources and sync things to disk). We
only recommend this option on Windows 9x/Me, or on systems where you have
very little resources.
ft_boolean_syntax
List of operators supported by MATCH ... AGAINST(... IN BOOLEAN MODE).
See section 6.8 MySQL Full-text Search.
ft_min_word_len
The minimum length of the word to be included in a FULLTEXT index.
Note: FULLTEXT indexes must be rebuilt after changing
this variable. (This option is new for MySQL 4.0.)
ft_max_word_len
The maximum length of the word to be included in a FULLTEXT index.
Note: FULLTEXT indexes must be rebuilt after changing
this variable. (This option is new for MySQL 4.0.)
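After changing ft_min_word_len or ft_max_word_len, one way to rebuild the FULLTEXT indexes of an affected MyISAM table (here a hypothetical table named articles) is to repair only its indexes:
mysql> REPAIR TABLE articles QUICK;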
ft_max_word_len_for_sort
The maximum length of the word in a FULLTEXT index
to be used in fast index recreation method in REPAIR,
CREATE INDEX, or ALTER TABLE. Longer words are inserted the
slow way. The rule of thumb is as follows: with
ft_max_word_len_for_sort increasing, MySQL will create bigger
temporary files (thus slowing the process down, due to disk I/O), and will put
fewer keys in one sort block (again, decreasing the efficiency). When
ft_max_word_len_for_sort is too small, MySQL will instead insert a
lot of words into the index the slow way, but short words will be inserted very
quickly.
ft_stopword_file
The file to read the list of stopwords for fulltext search from.
All the words from the file will be used; comments are not honored.
By default, the built-in list of stopwords is used
(as defined in `myisam/ft_static.c').
Setting this parameter to an empty string ("") will disable
stopword filtering.
Note: FULLTEXT indexes must be rebuilt after changing
this variable. (This option is new for MySQL 4.0.10)
have_innodb
YES if mysqld supports InnoDB tables. DISABLED
if --skip-innodb is used.
have_bdb
YES if mysqld supports Berkeley DB tables. DISABLED
if --skip-bdb is used.
have_raid
YES if mysqld supports the RAID option.
have_openssl
YES if mysqld supports SSL (encryption) on the client/server
protocol.
init_file
The name of the file specified with the --init-file option when
you start the server. This is a file of SQL statements you want the
server to execute when it starts.
interactive_timeout
The number of seconds the server waits for activity on an interactive
connection before closing it. An interactive client is defined as a
client that uses the CLIENT_INTERACTIVE option to
mysql_real_connect(). See also wait_timeout.
join_buffer_size
The size of the buffer that is used for full joins (joins that do not
use indexes). The buffer is allocated one time for each full join
between two tables. Increase this value to get a faster full join when
adding indexes is not possible. (Normally the best way to get fast joins
is to add indexes.)
key_buffer_size
Index blocks are buffered and are shared by all threads.
key_buffer_size is the size of the buffer used for index blocks.
Increase this to get better index handling (for all reads and multiple
writes) to as much as you can afford; 64M on a 256M machine that mainly
runs MySQL is quite common. If you, however, make this too big
(for instance more than 50% of your total memory) your system may start
to page and become extremely slow. Remember that because MySQL does not
cache data reads, you will have to leave some room for the OS
filesystem cache.
You can check the performance of the key buffer by doing SHOW
STATUS and examine the variables Key_read_requests,
Key_reads, Key_write_requests, and Key_writes. The
Key_reads/Key_read_requests ratio should normally be < 0.01.
The Key_writes/Key_write_requests ratio is usually near 1 if you are
using mostly updates/deletes, but may be much smaller if you tend to
do updates that affect many rows at the same time or if you are
using DELAY_KEY_WRITE. See section 4.5.7 SHOW Syntax.
To get even more speed when writing many rows at the same time, use
LOCK TABLES. See section 6.7.2 LOCK TABLES/UNLOCK TABLES Syntax.
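For example, a batch of inserts into a hypothetical table t can be wrapped like this, so that the key blocks only need to be flushed once, when UNLOCK TABLES is executed:
mysql> LOCK TABLES t WRITE;
mysql> INSERT INTO t VALUES (1,'a'),(2,'b');
mysql> INSERT INTO t VALUES (3,'c'),(4,'d');
mysql> UNLOCK TABLES;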
language
The language used for error messages.
large_files_support
If mysqld was compiled with options for big file support.
locked_in_memory
If mysqld was locked in memory with --memlock.
log
If logging of all queries is enabled.
log_update
If the update log is enabled.
log_bin
If the binary log is enabled.
log_slave_updates
If the updates from the slave should be logged.
long_query_time
If a query takes longer than this (in seconds), the Slow_queries counter
will be incremented. If you are using --log-slow-queries, the query
will be logged to the slow query logfile. This value is measured in real
time, not CPU time, so a query that may be under the threshold on a lightly
loaded system may be above the threshold on a heavily loaded one.
See section 4.9.5 The Slow Query Log.
lower_case_table_names
If set to 1 table names are stored in lowercase on disk and table
name comparisons will be case-insensitive.
From version 4.0.2, this option also applies to database names.
From 4.1.1 this option also applies to table alias.
See section 6.1.3 Case Sensitivity in Names.
max_allowed_packet
The maximum size of one packet. The message buffer is initialised to
net_buffer_length bytes, but can grow up to max_allowed_packet
bytes when needed. This value by default is small, to catch big (possibly
wrong) packets. You must increase this value if you are using big
BLOB columns. It should be as big as the biggest BLOB you want
to use. The protocol limit for max_allowed_packet is 16M in MySQL
3.23 and 1G in MySQL 4.0.
max_binlog_cache_size
If a multi-statement transaction requires more than this amount of memory,
one will get the error "Multi-statement transaction required more than
'max_binlog_cache_size' bytes of storage".
max_binlog_size
Available after 3.23.33. If a write to the binary (replication) log exceeds
the given value, rotate the logs. You cannot set it to less than 1024 bytes,
or more than 1 GB. Default is 1 GB. Note if you are using
transactions: a transaction is written in one chunk to the binary log,
hence it is never split between several binary logs. Therefore, if you
have big transactions, you may see binlogs bigger than max_binlog_size.
max_connections
The number of simultaneous clients allowed. Increasing this value increases
the number of file descriptors that mysqld requires. See below for
comments on file descriptor limits. See section A.2.5 Too many connections Error.
max_connect_errors
If there are more than this number of interrupted connections from a host,
that host will be blocked from further connections. You can unblock a host
with the command FLUSH HOSTS.
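For example, if a client host has been blocked after too many interrupted connections, it can be unblocked with:
mysql> FLUSH HOSTS;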
max_delayed_threads
Don't start more than this number of threads to handle INSERT DELAYED
statements. If you try to insert data into a new table after all INSERT
DELAYED threads are in use, the row will be inserted as if the
DELAYED attribute wasn't specified. If you set this to 0, MySQL
will never create a thread to handle DELAYED statements.
max_heap_table_size
Don't allow creation of heap tables bigger than this.
max_join_size
Joins that are probably going to read more than max_join_size
records return an error. Set this value if your users tend to perform joins
that lack a WHERE clause, that take a long time, and that return
millions of rows.
max_sort_length
The number of bytes to use when sorting BLOB or TEXT
values (only the first max_sort_length bytes of each value
are used; the rest are ignored).
max_user_connections
The maximum number of active connections for a single user (0 = no limit).
max_tmp_tables
(This option doesn't yet do anything.)
Maximum number of temporary tables a client can keep open at the same time.
max_write_lock_count
After this many write locks, allow some read locks to run in between.
myisam_recover_options
The value of the --myisam-recover option.
myisam_sort_buffer_size
The buffer that is allocated when sorting the index when doing a
REPAIR or when creating indexes with CREATE INDEX or
ALTER TABLE.
myisam_max_extra_sort_file_size
If the temporary file used for fast index creation would be bigger than
using the key cache by the amount specified here, then prefer the key
cache method. This is mainly used to force long character keys in large
tables to use the slower key cache method to create the index.
Note that this parameter is given in megabytes before 4.0.3 and
in bytes beginning with this version.
myisam_repair_threads
If this value is greater than one, MyISAM table indexes will be created
in parallel during the Repair by sorting process, each index in its
own thread. Note: multi-threaded repair is still alpha quality code.
myisam_max_sort_file_size
The maximum size of the temporary file MySQL is allowed to use
while recreating the index (during REPAIR, ALTER TABLE,
or LOAD DATA INFILE). If the file size would be bigger than this,
the index will be created through the key cache (which is slower).
Note that this parameter is given in megabytes before 4.0.3 and
in bytes beginning with this version.
net_buffer_length
The communication buffer is reset to this size between queries. This
should not normally be changed, but if you have very little memory, you
can set it to the expected size of a query. (That is, the expected length of
SQL statements sent by clients. If statements exceed this length, the buffer
is automatically enlarged, up to max_allowed_packet bytes.)
net_read_timeout
Number of seconds to wait for more data from a connection before aborting
the read. Note that when we don't expect data from a connection, the timeout
is defined by write_timeout. See also slave_net_timeout.
net_retry_count
If a read on a communication port is interrupted, retry this many times
before giving up. This value should be quite high on FreeBSD as
internal interrupts are sent to all threads.
net_write_timeout
Number of seconds to wait for a block to be written to a connection before
aborting the write.
open_files_limit
If this is not 0, then mysqld will use this value to reserve file
descriptors to use with setrlimit(). If this value is 0 then
mysqld will reserve max_connections*5 or
max_connections + table_cache*2 (whichever is larger) number of
files. You should try increasing this if mysqld gives you the
error 'Too many open files'.
pid_file
The value of the --pid-file option.
port
The value of the --port option.
protocol_version
The protocol version used by the MySQL server.
read_buffer_size (was record_buffer)
Each thread that does a sequential scan allocates a buffer of this
size for each table it scans. If you do many sequential scans, you may
want to increase this value.
read_rnd_buffer_size
When reading rows in sorted order after a sort, the rows are read
through this buffer to avoid disk seeks. This can improve ORDER BY
performance a lot if set to a high value. As this is a thread-specific
variable, you should not set it very large globally, but rather change it
only when running some specific big queries.
query_cache_limit
Don't cache results that are bigger than this. (Default 1M).
query_cache_size
The memory allocated to store results from old queries.
If this is 0, the query cache is disabled (default).
query_cache_type
This may be set (numeric values only) to one of the following:
| Value | Alias | Comment |
| 0 | OFF | Don't cache or retrieve results. |
| 1 | ON | Cache all results except SELECT SQL_NO_CACHE ... queries. |
| 2 | DEMAND | Cache only SELECT SQL_CACHE ... queries. |
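For example, with query_cache_type set to DEMAND only queries that explicitly ask for it are cached, while with ON an individual query can opt out (the table and column names below are only placeholders):
mysql> SELECT SQL_CACHE id, name FROM customer;
mysql> SELECT SQL_NO_CACHE COUNT(*) FROM customer;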
safe_show_database
Don't show databases for which the user doesn't have any database or
table privileges. This can improve security if you're concerned about
people being able to see what databases other users have. See also
skip_show_database.
server_id
The value of the --server-id option.
skip_locking
Is OFF if mysqld uses external locking.
skip_networking
Is ON if we only allow local (socket) connections.
skip_show_database
This prevents people from doing SHOW DATABASES if they don't have
the PROCESS privilege. This can improve security if you're
concerned about people being able to see what databases other users
have. See also safe_show_database.
slave_net_timeout
Number of seconds to wait for more data from a master/slave connection
before aborting the read.
slow_launch_time
If creating the thread takes longer than this value (in seconds), the
Slow_launch_threads counter will be incremented.
socket
The Unix socket used by the server.
sort_buffer_size
Each thread that needs to do a sort allocates a buffer of this
size. Increase this value for faster ORDER BY or GROUP BY
operations.
See section A.4.4 Where MySQL Stores Temporary Files.
table_cache
The number of open tables for all threads. Increasing this value
increases the number of file descriptors that mysqld requires.
You can check if you need to increase the table cache by checking the
Opened_tables variable.
See section 4.5.7.3 SHOW STATUS.
If the Opened_tables status variable is big and you don't do FLUSH TABLES a
lot (which just forces all tables to be closed and reopened), then you
should increase the value of table_cache.
For more information about the table cache, see section 5.4.7 How MySQL Opens and Closes Tables.
table_type
The default table type.
thread_cache_size
How many threads we should keep in a cache for reuse. When a
client disconnects, the client's thread is put in the cache if there
aren't more than thread_cache_size threads from before. All new
threads are first taken from the cache, and only when the cache is empty
is a new thread created. This variable can be increased to improve
performance if you have a lot of new connections. (Normally this doesn't
give a notable performance improvement if you have a good
thread implementation.) By examining the difference between
the Connections and Threads_created status variables
(see section 4.5.7.3 SHOW STATUS for details) you can see how efficient
the thread cache is.
thread_concurrency
On Solaris, mysqld will call thr_setconcurrency() with
this value. thr_setconcurrency() permits the application to give
the threads system a hint for the desired number of threads that should
be run at the same time.
thread_stack
The stack size for each thread. Many of the limits detected by the
crash-me test are dependent on this value. The default is
large enough for normal operation. See section 5.1.4 The MySQL Benchmark Suite.
timezone
The timezone for the server.
tmp_table_size
If an in-memory temporary table exceeds this size, MySQL
will automatically convert it to an on-disk MyISAM table.
Increase the value of tmp_table_size if you do many advanced
GROUP BY queries and you have lots of memory.
tmpdir
The directory used for temporary files and temporary tables.
Starting from MySQL 4.1, it can be set to a list of paths
separated by colon : (semicolon ; on Windows). They
will be used in round-robin fashion. This feature can be used to
spread load between several physical disks.
version
The version number for the server.
wait_timeout
The number of seconds the server waits for activity on a non-interactive
connection before closing it.
On thread startup SESSION.WAIT_TIMEOUT is initialised from
GLOBAL.WAIT_TIMEOUT or GLOBAL.INTERACTIVE_TIMEOUT depending
on the type of client (as defined by the CLIENT_INTERACTIVE connect
option). See also interactive_timeout.
The manual section that describes tuning MySQL contains some information of how to tune the above variables. See section 5.5.2 Tuning Server Parameters.
SHOW LOGS
SHOW LOGS shows you status information about existing log
files. It currently only displays information about Berkeley DB log
files.
File shows the full path to the log file
Type shows the type of the log file (BDB for Berkeley
DB log files)
Status shows the status of the log file (FREE if the
file can be removed, or IN USE if the file is needed by the transaction
subsystem)
SHOW PROCESSLIST
SHOW [FULL] PROCESSLIST shows you which threads are running.
You can also get this information using the mysqladmin processlist
command. If you have the SUPER privilege, you can see all
threads. Otherwise, you can see only your own threads.
See section 4.5.6 KILL Syntax.
If you don't use the FULL option, then only the first 100
characters of each query will be shown.
Starting from 4.0.12, MySQL reports the hostname for TCP/IP connections
as hostname:client_port to make it easier to find out which client
is doing what.
This command is very useful if you get the 'too many connections' error
message and want to find out what's going on. MySQL reserves
one extra connection for a client with the SUPER privilege
to ensure that you are always able to log in and check the system
(assuming you are not giving this privilege to all your users).
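For example, either of the following shows the currently running threads; add FULL to avoid the 100-character truncation of the query text:
mysql> SHOW FULL PROCESSLIST;
shell> mysqladmin processlist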
Some states commonly seen in mysqladmin processlist
Checking table
The thread is performing [automatic] checking of the table.
Closing tables
Means that the thread is flushing the changed table data to disk and
closing the used tables. This should be a fast operation. If not, then
you should check that you don't have a full disk or that the disk is not
in very heavy use.
Connect Out
Slave connecting to master.
Copying to tmp table on disk
The temporary result set was larger than tmp_table_size and the
thread is now changing the in-memory temporary table to a disk-based
one to save memory.
Creating tmp table
The thread is creating a temporary table to hold a part of the result for
the query.
deleting from main table
When executing the first part of a multi-table delete and we are only
deleting from the first table.
deleting from reference tables
When executing the second part of a multi-table delete and we are deleting
the matched rows from the other tables.
Flushing tables
The thread is executing FLUSH TABLES and is waiting for all
threads to close their tables.
Killed
Someone has sent a kill to the thread and it should abort next time it
checks the kill flag. The flag is checked in each major loop in MySQL,
but in some cases it may still take a short time for the thread to die.
If the thread is locked by some other thread, the kill will take effect
as soon as the other thread releases its lock.
Sending data
The thread is processing rows for a SELECT statement and is
also sending data to the client.
Sorting for group
The thread is doing a sort to satisfy a GROUP BY.
Sorting for order
The thread is doing a sort to satisfy an ORDER BY.
Opening tables
This simply means that the thread is trying to open a table. This
should be a very fast procedure, unless something prevents opening. For
example, an ALTER TABLE or a LOCK TABLE can prevent opening
a table until the command is finished.
Removing duplicates
The query was using SELECT DISTINCT in such a way that MySQL
couldn't optimise that distinct away at an early stage. Because of this
MySQL has to do an extra stage to remove all duplicated rows before
sending the result to the client.
Reopen table
The thread got a lock for the table, but noticed after getting the lock
that the underlying table structure changed. It has freed the lock,
closed the table and is now trying to reopen it.
Repair by sorting
The repair code is using sorting to create indexes.
Repair with keycache
The repair code is creating keys one by one through the key cache.
This is much slower than Repair by sorting.
Searching rows for update
The thread is doing a first phase to find all matching rows before
updating them. This has to be done if the UPDATE is changing
the index that is used to find the involved rows.
Sleeping
The thread is waiting for the client to send a new command to it.
System lock
The thread is waiting to get an external system lock for the
table. If you are not using multiple mysqld servers that are accessing
the same tables, you can disable system locks with the
--skip-external-locking option.
Upgrading lock
The INSERT DELAYED handler is trying to get a lock for the table
to insert rows.
Updating
The thread is searching for rows to update and updating them.
User Lock
The thread is waiting on a GET_LOCK().
Waiting for tables
The thread got a notification that the underlying structure for a table
has changed and it needs to reopen the table to get the new structure.
To be able to reopen the table it must however wait until all other
threads have closed the table in question.
This notification happens if another thread has used FLUSH TABLES
or one of the following commands on the table in question: FLUSH
TABLES table_name, ALTER TABLE, RENAME TABLE,
REPAIR TABLE, ANALYZE TABLE or OPTIMIZE TABLE.
waiting for handler insert
The INSERT DELAYED handler has processed all inserts and is
waiting to get new ones.
Most states are very quick operations. If threads last in any of these states for many seconds, there may be a problem that needs to be investigated.
There are some other states that are not mentioned previously, but most of
these are only useful to find bugs in mysqld.
SHOW GRANTS
SHOW GRANTS FOR user lists the grant commands that must be issued to
duplicate the grants for a user.
mysql> SHOW GRANTS FOR root@localhost;
+---------------------------------------------------------------------+
| Grants for root@localhost                                           |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+
To list grants for the current session, one may use the CURRENT_USER()
function (new in version 4.0.6) to find out what user the session
was authenticated as.
See section 6.3.6.2 Miscellaneous Functions.
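For example, one could first check which account the current session was authenticated as and then list that account's grants (the account name below is only an example):
mysql> SELECT CURRENT_USER();
mysql> SHOW GRANTS FOR 'web_user'@'localhost';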
SHOW CREATE TABLE
Shows a CREATE TABLE statement that will create the given table:
mysql> SHOW CREATE TABLE t\G
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE t (
id INT(11) default NULL auto_increment,
s char(60) default NULL,
PRIMARY KEY (id)
) TYPE=MyISAM
SHOW CREATE TABLE will quote table and column names according to
the SQL_QUOTE_SHOW_CREATE option.
See section 5.5.6 SET Syntax.
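For example, quoting of identifiers in the output can be switched on and off like this:
mysql> SET SQL_QUOTE_SHOW_CREATE=1;
mysql> SHOW CREATE TABLE t\G
mysql> SET SQL_QUOTE_SHOW_CREATE=0;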
SHOW WARNINGS | ERRORS
SHOW WARNINGS [LIMIT #]
SHOW ERRORS [LIMIT #]
This command is implemented in MySQL 4.1.0.
It shows the errors, warnings and notes that one got for the last command. The errors/warnings are reset for each new command that uses a table.
The MySQL server sends back the total number of warnings and errors you
got for the last command; this can be retrieved by calling
mysql_warning_count().
Up to max_error_count messages are stored (this is a global and
thread-specific variable).
You can retrieve the number of errors from @error_count and
warnings from @warning_count.
SHOW WARNINGS shows all errors, warnings and notes you got for
the last command while SHOW ERRORS only shows you the errors.
mysql> DROP TABLE IF EXISTS no_such_table;
mysql> SHOW WARNINGS;
+-------+------+-------------------------------+
| Level | Code | Message                       |
+-------+------+-------------------------------+
| Note  | 1051 | Unknown table 'no_such_table' |
+-------+------+-------------------------------+
Note that MySQL 4.1.0 only adds the framework for warnings;
not many MySQL commands yet generate warnings. 4.1.1 supports all
kinds of warnings for LOAD DATA INFILE and DML statements such as
INSERT, UPDATE and ALTER commands.
For example, here is a simple case which produces conversion warnings for an insert statement.
mysql> create table t1(a tinyint NOT NULL, b char(4));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into t1 values(10,'mysql'),(NULL,'test'),(300,'open source');
Query OK, 3 rows affected, 4 warnings (0.15 sec)
Records: 3  Duplicates: 0  Warnings: 4

mysql> show warnings;
+---------+------+---------------------------------------------------------------+
| Level   | Code | Message                                                       |
+---------+------+---------------------------------------------------------------+
| Warning | 1263 | Data truncated for column 'b' at row 1                        |
| Warning | 1261 | Data truncated, NULL supplied to NOT NULL column 'a' at row 2 |
| Warning | 1262 | Data truncated, out of range for column 'a' at row 3          |
| Warning | 1263 | Data truncated for column 'b' at row 3                        |
+---------+------+---------------------------------------------------------------+
4 rows in set (0.00 sec)
The maximum number of warnings can be specified using the server variable
max_error_count (SET max_error_count=[count]). By default
it is 64. To disable warnings, simply set this variable to 0.
If max_error_count is 0, the warning count still indicates how many
warnings have occurred, but none of the messages are stored.
For example, consider the following ALTER TABLE statement for the
table above; when max_error_count is set to 1, it returns only one
warning message even though three warnings occurred.
mysql> show variables like 'max_error_count';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_error_count | 64    |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> set max_error_count=1;
Query OK, 0 rows affected (0.00 sec)

mysql> alter table t1 modify b char;
Query OK, 3 rows affected, 3 warnings (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 3

mysql> show warnings;
+---------+------+----------------------------------------+
| Level   | Code | Message                                |
+---------+------+----------------------------------------+
| Warning | 1263 | Data truncated for column 'b' at row 1 |
+---------+------+----------------------------------------+
1 row in set (0.00 sec)
SHOW TABLE TYPES
This command is implemented in MySQL 4.1.0.
SHOW TABLE TYPES shows you status information about the table
types. This is particularly useful for checking whether a table type is
supported, or to see what the default table type is.
mysql> SHOW TABLE TYPES;
+--------+---------+-----------------------------------------------------------+
| Type   | Support | Comment                                                   |
+--------+---------+-----------------------------------------------------------+
| MyISAM | DEFAULT | Default type from 3.23 with great performance             |
| HEAP   | YES     | Hash based, stored in memory, useful for temporary tables |
| MERGE  | YES     | Collection of identical MyISAM tables                     |
| ISAM   | YES     | Obsolete table type; Is replaced by MyISAM                |
| InnoDB | YES     | Supports transactions, row-level locking and foreign keys |
| BDB    | NO      | Supports transactions and page-level locking              |
+--------+---------+-----------------------------------------------------------+
6 rows in set (0.00 sec)
The 'Support' column indicates whether the particular table
type is supported; the value DEFAULT marks the default table type. If the
server is started with --default-table-type=InnoDB, then the InnoDB
'Support' field will have the value DEFAULT.
SHOW PRIVILEGES
This command is implemented in MySQL 4.1.0.
SHOW PRIVILEGES shows the list of system privileges that the underlying
MySQL server supports.
mysql> show privileges;
+------------+--------------------------+-------------------------------------------------------+
| Privilege  | Context                  | Comment                                               |
+------------+--------------------------+-------------------------------------------------------+
| Select     | Tables                   | To retrieve rows from table                           |
| Insert     | Tables                   | To insert data into tables                            |
| Update     | Tables                   | To update existing rows                               |
| Delete     | Tables                   | To delete existing rows                               |
| Index      | Tables                   | To create or drop indexes                             |
| Alter      | Tables                   | To alter the table                                    |
| Create     | Databases,Tables,Indexes | To create new databases and tables                    |
| Drop       | Databases,Tables         | To drop databases and tables                          |
| Grant      | Databases,Tables         | To give to other users those privileges you possess   |
| References | Databases,Tables         | To have references on tables                          |
| Reload     | Server Admin             | To reload or refresh tables, logs and privileges      |
| Shutdown   | Server Admin             | To shutdown the server                                |
| Process    | Server Admin             | To view the plain text of currently executing queries |
| File       | File access on server    | To read and write files on the server                 |
+------------+--------------------------+-------------------------------------------------------+
14 rows in set (0.00 sec)
By default, MySQL uses the ISO-8859-1 (Latin1) character set with sorting according to Swedish/Finnish. This is the character set suitable for the USA and western Europe.
All standard MySQL binaries are compiled with
--with-extra-charsets=complex. This will add code to all
standard programs to be able to handle latin1 and all multi-byte
character sets within the binary. Other character sets will be
loaded from a character-set definition file when needed.
The character set determines what characters are allowed in names and how
things are sorted by the ORDER BY and GROUP BY clauses of
the SELECT statement.
You can change the character set with the --default-character-set
option when you start the server. The character sets available depend
on the --with-charset=charset and --with-extra-charsets=
list-of-charset | complex | all options to configure, and the
character set configuration files listed in
`SHAREDIR/charsets/Index'. See section 2.3.3 Typical configure Options.
If you change the character set when running MySQL (which may
also change the sort order), you must run myisamchk -r -q
--set-character-set=charset on all
tables. Otherwise, your indexes may not be ordered correctly.
When a client connects to a MySQL server, the server sends the default character set in use to the client. The client will switch to use this character set for this connection.
One should use mysql_real_escape_string() when escaping strings
for a SQL query. mysql_real_escape_string() is identical to the
old mysql_escape_string() function, except that it takes the MYSQL
connection handle as the first parameter.
If the client is compiled with different paths than where the server is installed and the user who configured MySQL didn't include all character sets in the MySQL binary, one must specify for the client where it can find the additional character sets it will need if the server runs with a different character set than the client.
One can specify this by putting in a MySQL option file:
[client]
character-sets-dir=/usr/local/mysql/share/mysql/charsets
where the path points to the directory in which the dynamic MySQL character sets are stored.
One can force the client to use specific character set by specifying:
[client]
default-character-set=character-set-name
but normally this is never needed.
To get German sorting order, you should start mysqld with
--default-character-set=latin1_de. This will give you the following
characteristics.
When sorting and comparing strings, the following mapping is done on the strings before doing the comparison:
ä -> ae
ö -> oe
ü -> ue
ß -> ss
All accented characters are converted to their unaccented uppercase
counterpart. All letters are converted to uppercase.
When comparing strings with LIKE the one -> two character mapping
is not done. All letters are converted to uppercase. Accents are removed
from all letters except: Ü, ü, Ö, ö,
Ä and ä.
mysqld can issue error messages in the following languages:
Czech, Danish, Dutch, English (the default), Estonian, French, German, Greek,
Hungarian, Italian, Japanese, Korean, Norwegian, Norwegian-ny, Polish,
Portuguese, Romanian, Russian, Slovak, Spanish, and Swedish.
To start mysqld with a particular language, use either the
--language=lang or -L lang options. For example:
shell> mysqld --language=swedish
or:
shell> mysqld --language=/usr/local/share/swedish
Note that all language names are specified in lowercase.
The language files are located (by default) in `mysql_base_dir/share/LANGUAGE/'.
To update the error message file, you should edit the `errmsg.txt' file and execute the following command to generate the `errmsg.sys' file:
shell> comp_err errmsg.txt errmsg.sys
If you upgrade to a newer version of MySQL, remember to repeat your changes with the new `errmsg.txt' file.
To add another character set to MySQL, use the following procedure.
Decide if the set is simple or complex. If the character set does not need to use special string collating routines for sorting and does not need multi-byte character support, it is simple. If it needs either of those features, it is complex.
For example, latin1 and danish are simple character sets, while
big5 or czech are complex character sets.
In the following section, we have assumed that you name your character
set MYSET.
For a simple character set do the following:
The ctype array takes up the first 257 words. The
to_lower[], to_upper[] and sort_order[] arrays take up
256 words each after that.
Add the character set name to the CHARSETS_AVAILABLE and
COMPILED_CHARSETS lists in configure.in.
For a complex character set do the following:
Define the arrays ctype_MYSET,
to_lower_MYSET, and so on. This corresponds to the arrays
in the simple character set. See section 4.6.4 The Character Definition Arrays.
/*
 * This comment is parsed by configure to create ctype.c,
 * so don't change it unless you know what you are doing.
 *
 * .configure. number_MYSET=MYNUMBER
 * .configure. strxfrm_multiply_MYSET=N
 * .configure. mbmaxlen_MYSET=N
 */
The configure program uses this comment to include
the character set into the MySQL library automatically.
The strxfrm_multiply and mbmaxlen lines will be explained in
the following sections. Only include these if you need the string
collating functions or the multi-byte character set functions,
respectively.
my_strncoll_MYSET()
my_strcoll_MYSET()
my_strxfrm_MYSET()
my_like_range_MYSET()
Add the character set name to the CHARSETS_AVAILABLE and
COMPILED_CHARSETS lists in configure.in.
The file `sql/share/charsets/README' includes some more instructions.
If you want to have the character set included in the MySQL distribution, mail a patch to internals@lists.mysql.com.
to_lower[] and to_upper[] are simple arrays that hold the
lowercase and uppercase characters corresponding to each member of the
character set. For example:
to_lower['A'] should contain 'a' to_upper['a'] should contain 'A'
sort_order[] is a map indicating how characters should be ordered for
comparison and sorting purposes. Quite often (but not for all character sets)
this is the same as to_upper[] (which means sorting will be
case-insensitive). MySQL will sort characters based on the value of
sort_order[character]. For more complicated sorting rules, see
the discussion of string collating below. See section 4.6.5 String Collating Support.
ctype[] is an array of bit values, with one element for one character.
(Note that to_lower[], to_upper[], and sort_order[]
are indexed by character value, but ctype[] is indexed by character
value + 1. This is an old legacy to be able to handle EOF.)
You can find the following bitmask definitions in `m_ctype.h':
#define _U 01 /* Uppercase */ #define _L 02 /* Lowercase */ #define _N 04 /* Numeral (digit) */ #define _S 010 /* Spacing character */ #define _P 020 /* Punctuation */ #define _C 040 /* Control character */ #define _B 0100 /* Blank */ #define _X 0200 /* heXadecimal digit */
The ctype[] entry for each character should be the union of the
applicable bitmask values that describe the character. For example,
'A' is an uppercase character (_U) as well as a
hexadecimal digit (_X), so ctype['A'+1] should contain the
value:
_U + _X = 01 + 0200 = 0201
If the sorting rules for your language are too complex to be handled
with the simple sort_order[] table, you need to use the string
collating functions.
Right now the best documentation on this is the character sets that are
already implemented. Look at the big5, czech, gbk,
sjis, and tis620 character sets for examples.
You must specify the strxfrm_multiply_MYSET=N value in the
special comment at the top of the file. N should be set to
the maximum ratio the strings may grow during my_strxfrm_MYSET (it
must be a positive integer).
If you want to add support for a new character set that includes multi-byte characters, you need to use the multi-byte character functions.
Right now the best documentation on this is the character sets that are
already implemented. Look at the euc_kr, gb2312,
gbk, sjis, and ujis character sets for
examples. These are implemented in the `ctype-'charset'.c' files
in the `strings' directory.
You must specify the mbmaxlen_MYSET=N value in the special
comment at the top of the source file. N should be set to the
size in bytes of the largest character in the set.
If you try to use a character set that is not compiled into your binary, you can run into a couple of different problems:
If the program cannot find the character set files, you may need to tell it
where they are located by giving the --character-sets-dir
option to the program in question.
If the character set is not listed in the `Index' file, the program will
report an error like:
ERROR 1105: File '/usr/local/share/mysql/charsets/?.conf' not found (Errcode: 2)
In this case you should either get a new Index file or add
by hand the name of any missing character sets.
For MyISAM tables, you can check the character set name and number for a
table with myisamchk -dvv table_name.
All MySQL programs take many different options. However, every
MySQL program provides a --help option that you can use
to get a full description of the program's different options. For example, try
mysql --help.
You can override default options for all standard programs with an option file. See section 4.1.2 `my.cnf' Option Files.
The following list briefly describes the server-side MySQL programs:
myisamchk
myisamchk has many functions; it is described in its own
chapter. See section 4 Database Administration.
make_binary_distribution
Makes a binary release of a compiled MySQL, which can be uploaded to
support.mysql.com for the
convenience of other MySQL users.
mysqlbug
The MySQL bug report script, used to generate bug reports to send to the
mailing list.
mysqld
The SQL daemon (the MySQL server itself). Client programs talk to the
server to access databases.
mysql_install_db
Creates the MySQL grant tables with default privileges. This is usually
executed only once, when MySQL is first installed on a system.
safe_mysqld, The Wrapper Around mysqld
Note that in MySQL 4.0 safe_mysqld was renamed to mysqld_safe.
safe_mysqld is the recommended way to start a mysqld
daemon on Unix. safe_mysqld adds some safety features such as
restarting the server when an error occurs and logging run-time
information to a log file.
If you don't use --mysqld=# or --mysqld-version=#,
safe_mysqld will use an executable named mysqld-max if it
exists. If not, safe_mysqld will start mysqld.
This makes it very easy to test using mysqld-max instead of
mysqld; just copy mysqld-max to where you have
mysqld and it will be used.
Normally one should never edit the safe_mysqld script, but
instead put the options to safe_mysqld in the
[safe_mysqld] section in the `my.cnf'
file. safe_mysqld will read all options from the [mysqld],
[server] and [safe_mysqld] sections from the option files.
See section 4.1.2 `my.cnf' Option Files.
Note that all options on the command-line to safe_mysqld are passed
to mysqld. If you want to use any options in safe_mysqld that
mysqld doesn't support, you must specify these in the option file.
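For example, a minimal [safe_mysqld] section might look as follows; the path is only a placeholder for your own setup, and the open-files-limit option only takes effect when safe_mysqld is started as root:
[safe_mysqld]
log-error=/usr/local/mysql/data/mysqld.err
open-files-limit=4096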
Most of the options to safe_mysqld are the same as the options to
mysqld. See section 4.1.1 mysqld Command-line Options.
safe_mysqld supports the following options:
--basedir=path
--core-file-size=#
The size of the core file mysqld should be able to create. Passed to ulimit -c.
--datadir=path
--defaults-extra-file=path
--defaults-file=path
--err-log=path (this is marked obsolete in 4.0; Use --log-error instead)
--log-error=path
--ledir=path
The path to the directory containing the mysqld binary.
--log=path
--mysqld=mysqld-version
The name of the mysqld version in the ledir directory you want to start.
--mysqld-version=version
Similar to --mysqld=, but here you only give the suffix for mysqld.
For example if you use --mysqld-version=max, safe_mysqld will
start the ledir/mysqld-max version. If the argument to
--mysqld-version is empty, ledir/mysqld will be used.
--no-defaults
--open-files-limit=#
The number of files mysqld should be able to open. Passed to ulimit -n. Note that you need to start safe_mysqld as root for this to work properly!
--pid-file=path
--port=#
--socket=path
--timezone=#
Sets the timezone (the TZ) variable to the value of this parameter.
--user=#
The safe_mysqld script is written so that it normally is able to start
a server that was installed from either a source or a binary version of
MySQL, even if these install the server in slightly different
locations. safe_mysqld expects one of these conditions to be true:
The server and the databases can be found relative to the directory from which
safe_mysqld is invoked. safe_mysqld looks under its working
directory for `bin' and `data' directories (for binary
distributions) or for `libexec' and `var' directories (for source
distributions). This condition should be met if you execute
safe_mysqld from your MySQL installation directory (for
example, `/usr/local/mysql' for a binary distribution).
If the server and databases cannot be found relative to the working directory,
safe_mysqld attempts to locate them by absolute pathnames. Typical
locations are `/usr/local/libexec' and `/usr/local/var'.
The actual locations are determined when the distribution was built from which
safe_mysqld comes. They should be correct if
MySQL was installed in a standard location.
Because safe_mysqld will try to find the server and databases relative
to its own working directory, you can install a binary distribution of
MySQL anywhere, as long as you start safe_mysqld from the
MySQL installation directory:
shell> cd mysql_installation_directory shell> bin/safe_mysqld &
If safe_mysqld fails, even when invoked from the MySQL
installation directory, you can modify it to use the path to mysqld
and the pathname options that are correct for your system. Note that if you
upgrade MySQL in the future, your modified version of
safe_mysqld will be overwritten, so you should make a copy of your
edited version that you can reinstall.
mysqld_multi, A Program for Managing Multiple MySQL Servers
mysqld_multi is meant for managing several mysqld
processes that listen for connections on different Unix sockets and
TCP/IP ports.
The program will search for group(s) named [mysqld#] from
`my.cnf' (or the file named by the --config-file=... option),
where # can be any positive number starting from 1. This number
is referred to in the following discussion as the option group number,
or GNR. Group numbers distinguish option groups from one another and are
used as arguments to mysqld_multi to specify which servers you want
to start, stop, or obtain status for. Options listed in these groups
should be the same as you would use in the usual [mysqld]
group used for starting mysqld. (See, for example, section 2.4.3 Starting and Stopping MySQL Automatically.) However, for mysqld_multi, be sure that each group
includes options for values such as the port, socket, etc., to be used
for each individual mysqld process.
mysqld_multi is invoked using the following syntax:
Usage: mysqld_multi [OPTIONS] {start|stop|report} [GNR,GNR,GNR...]
or mysqld_multi [OPTIONS] {start|stop|report} [GNR-GNR,GNR,GNR-GNR,...]
Each GNR represents an option group number. You can start, stop or report any GNR, or several of them at the same time. For an example of how you might set up an option file, use this command:
shell> mysqld_multi --example
The GNR values in the list can be comma-separated or combined with a dash; in the latter case, all the GNRs between GNR1-GNR2 will be affected. With no GNR argument, all groups listed in the option file will be either started, stopped, or reported. Note that you must not have any white spaces in the GNR list. Anything after a white space is ignored.
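For example, with the sample configuration file shown at the end of this section (groups [mysqld2] through [mysqld6]), the following hypothetical invocations would start two of the servers, report on all of them, and stop one of them:
shell> mysqld_multi start 2,3
shell> mysqld_multi report
shell> mysqld_multi stop 6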
mysqld_multi supports the following options:
--config-file=...
Alternative config file. Note: this will not affect this program's own
options (group [mysqld_multi]), but only groups
[mysqld#]. Without this option, everything will be searched from the
ordinary `my.cnf' file.
--example
--help
--log=...
--mysqladmin=...
mysqladmin binary to be used for a server shutdown.
--mysqld=...
mysqld binary to be used. Note that you can give
safe_mysqld to this option also. The options are passed to
mysqld. Just make sure you have mysqld in your environment
variable PATH or fix safe_mysqld.
--no-log
--password=...
The password for the MySQL user used with mysqladmin.
--tcp-ip
--user=...
The MySQL user to use with mysqladmin.
--version
Some notes about mysqld_multi:
Make sure that the MySQL user who is used to stop the
mysqld services (e.g., using the mysqladmin program) has the same
password and username for all the data directories accessed (to the
mysql database). And make sure that the user has the SHUTDOWN
privilege! If you have many data directories and many different mysql
databases with different passwords for the MySQL root user,
you may want to create a common multi_admin user for each using the
same password (see below). Example of how to do it:
shell> mysql -u root -S /tmp/mysql.sock -proot_password -e "GRANT SHUTDOWN ON *.* TO multi_admin@localhost IDENTIFIED BY 'multipass'"
See section 4.2.6 How the Privilege System Works. You will have to do the
above for each mysqld running in each data directory that you have (just
change the socket, -S=...).
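For instance, if a second server listens on the socket `/tmp/mysql.sock2' (as in the example option file later in this section), the same grant would be issued through that socket; the socket path and password here are only illustrative:
shell> mysql -u root -S /tmp/mysql.sock2 -proot_password -e "GRANT SHUTDOWN ON *.* TO multi_admin@localhost IDENTIFIED BY 'multipass'"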
Using a pid-file is very important if you are using safe_mysqld to start
mysqld (for example, --mysqld=safe_mysqld). Every mysqld should have its
own pid-file. The advantage of using safe_mysqld instead of mysqld
directly here is that safe_mysqld ``guards'' every mysqld process and
will restart it if a mysqld process terminates due to a signal sent
using kill -9, or for other reasons such as a segmentation fault (which
MySQL should never do, of course ;). Please note that the safe_mysqld
script may require that you start it from a certain place. This means
that you may have to cd to a certain directory before you start
mysqld_multi. If you have problems starting, please see the
safe_mysqld script. Check especially the lines:
--------------------------------------------------------------------------
MY_PWD=`pwd`
# Check if we are starting this relative (for the binary release)
if test -d /data/mysql -a -f ./share/mysql/english/errmsg.sys -a -x ./bin/mysqld
--------------------------------------------------------------------------
See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
The above test should be successful, or you may encounter problems.
Beware of the dangers of starting multiple mysqlds in the same data
directory. Use separate data directories, unless you know what
you are doing!
The socket file and the TCP/IP port must be different for every mysqld.
The first and fifth mysqld groups were intentionally left out of
the example. You may have 'gaps' in the config file. This gives you
more flexibility. The order in which the mysqlds are started or
stopped depends on the order in which they appear in the config file.
The GNR of a group is the number in its name; for example, the GNR of
[mysqld17] is 17.
You may want to use the option --user for mysqld, but in order to
do this you need to run the mysqld_multi script as the Unix root
user. Having the option in the config file doesn't matter; you will
just get a warning if you are not the superuser and the mysqlds
are started under your own Unix account. Important: Make
sure that the pid-file and the data directory are
readable and writable (and executable, for the latter) by the
Unix user that the specific mysqld process is started
as. Do not use the Unix root account for this, unless you
know what you are doing!
Make sure that you understand the meanings of the options that are
passed to the mysqlds and why one would want to have separate
mysqld processes. Starting multiple mysqlds in one data directory
will not give you extra performance in a threaded system!
See section 4.1.4 Running Multiple MySQL Servers on the Same Machine.
This is an example of a config file for use with mysqld_multi.
# This file should probably be in your home dir (~/.my.cnf) or /etc/my.cnf
# Version 2.1 by Jani Tolonen

[mysqld_multi]
mysqld     = /usr/local/bin/safe_mysqld
mysqladmin = /usr/local/bin/mysqladmin
user       = multi_admin
password   = multipass

[mysqld2]
socket     = /tmp/mysql.sock2
port       = 3307
pid-file   = /usr/local/mysql/var2/hostname.pid2
datadir    = /usr/local/mysql/var2
language   = /usr/local/share/mysql/english
user       = john

[mysqld3]
socket     = /tmp/mysql.sock3
port       = 3308
pid-file   = /usr/local/mysql/var3/hostname.pid3
datadir    = /usr/local/mysql/var3
language   = /usr/local/share/mysql/swedish
user       = monty

[mysqld4]
socket     = /tmp/mysql.sock4
port       = 3309
pid-file   = /usr/local/mysql/var4/hostname.pid4
datadir    = /usr/local/mysql/var4
language   = /usr/local/share/mysql/estonia
user       = tonu

[mysqld6]
socket     = /tmp/mysql.sock6
port       = 3311
pid-file   = /usr/local/mysql/var6/hostname.pid6
datadir    = /usr/local/mysql/var6
language   = /usr/local/share/mysql/japanese
user       = jani
See section 4.1.2 `my.cnf' Option Files.
myisampack, The MySQL Compressed Read-only Table Generator
myisampack is used to compress MyISAM tables, and pack_isam
is used to compress ISAM tables. Because ISAM tables are deprecated, we
will only discuss myisampack here, but everything said about
myisampack should also be true for pack_isam.
myisampack works by compressing each column in the table separately.
The information needed to decompress columns is read into memory when the
table is opened. This results in much better performance when accessing
individual records, because you only have to uncompress exactly one record, not
a much larger disk block as when using Stacker on MS-DOS.
Usually, myisampack packs the data file by 40%-70%.
MySQL uses memory mapping (mmap()) on compressed tables and
falls back to normal read/write file usage if mmap() doesn't work.
Please note the following:
myisampack can also pack BLOB or TEXT columns.
The older pack_isam (for ISAM tables) can not do this.
myisampack is invoked like this:
shell> myisampack [options] filename ...
Each filename should be the name of an index (`.MYI') file. If you are not in the database directory, you should specify the pathname to the file. It is permissible to omit the `.MYI' extension.
myisampack supports the following options:
-b, --backup
Make a backup of the table as tbl_name.OLD.
-#, --debug=debug_options
Output a debug log. The debug_options string often is
'd:t:o,filename'.
-f, --force
myisampack creates a temporary file named `tbl_name.TMD'
while it compresses the table. If you kill myisampack, the `.TMD'
file may not be deleted. Normally, myisampack exits with an error if
it finds that `tbl_name.TMD' exists. With --force,
myisampack packs the table anyway.
-?, --help
-j big_tbl_name, --join=big_tbl_name
Join all tables named on the command-line into a single table
big_tbl_name. All tables that are to be combined
must be identical (same column names and types, same indexes, etc.).
-p #, --packlength=#
Specify the record length storage size, in bytes. The value should be
1, 2, or 3. (myisampack stores all rows with length pointers of 1, 2,
or 3 bytes. In most normal cases, myisampack can determine the right
length value before it begins packing the file, but it may notice during
the packing process that it could have used a shorter length. In this
case, myisampack will print a note that the next time you pack the same
file, you could use a shorter record length.)
-s, --silent
-t, --test
-T dir_name, --tmp_dir=dir_name
-v, --verbose
-V, --version
-w, --wait
Wait and retry if the table is in use. If the mysqld server was
invoked with the --skip-external-locking option, it is not a good idea
to invoke myisampack if the table might be updated during the
packing process.
The sequence of commands shown here illustrates a typical table compression session:
shell> ls -l station.* -rw-rw-r-- 1 monty my 994128 Apr 17 19:00 station.MYD -rw-rw-r-- 1 monty my 53248 Apr 17 19:00 station.MYI -rw-rw-r-- 1 monty my 5767 Apr 17 19:00 station.frm shell> myisamchk -dvv station MyISAM file: station Isam-version: 2 Creation time: 1996-03-13 10:08:58 Recover time: 1997-02-02 3:06:43 Data records: 1192 Deleted blocks: 0 Datafile: Parts: 1192 Deleted data: 0 Datafile pointer (bytes): 2 Keyfile pointer (bytes): 2 Max datafile length: 54657023 Max keyfile length: 33554431 Recordlength: 834 Record format: Fixed length table description: Key Start Len Index Type Root Blocksize Rec/key 1 2 4 unique unsigned long 1024 1024 1 2 32 30 multip. text 10240 1024 1 Field Start Length Type 1 1 1 2 2 4 3 6 4 4 10 1 5 11 20 6 31 1 7 32 30 8 62 35 9 97 35 10 132 35 11 167 4 12 171 16 13 187 35 14 222 4 15 226 16 16 242 20 17 262 20 18 282 20 19 302 30 20 332 4 21 336 4 22 340 1 23 341 8 24 349 8 25 357 8 26 365 2 27 367 2 28 369 4 29 373 4 30 377 1 31 378 2 32 380 8 33 388 4 34 392 4 35 396 4 36 400 4 37 404 1 38 405 4 39 409 4 40 413 4 41 417 4 42 421 4 43 425 4 44 429 20 45 449 30 46 479 1 47 480 1 48 481 79 49 560 79 50 639 79 51 718 79 52 797 8 53 805 1 54 806 1 55 807 20 56 827 4 57 831 4 shell> myisampack station.MYI Compressing station.MYI: (1192 records) - Calculating statistics normal: 20 empty-space: 16 empty-zero: 12 empty-fill: 11 pre-space: 0 end-space: 12 table-lookups: 5 zero: 7 Original trees: 57 After join: 17 - Compressing file 87.14% shell> ls -l station.* -rw-rw-r-- 1 monty my 127874 Apr 17 19:00 station.MYD -rw-rw-r-- 1 monty my 55296 Apr 17 19:04 station.MYI -rw-rw-r-- 1 monty my 5767 Apr 17 19:00 station.frm shell> myisamchk -dvv station MyISAM file: station Isam-version: 2 Creation time: 1996-03-13 10:08:58 Recover time: 1997-04-17 19:04:26 Data records: 1192 Deleted blocks: 0 Datafile: Parts: 1192 Deleted data: 0 Datafilepointer (bytes): 3 Keyfile pointer (bytes): 1 Max datafile length: 16777215 Max keyfile length: 131071 Recordlength: 834 Record format: Compressed table description: Key Start Len Index Type Root Blocksize Rec/key 1 2 4 unique unsigned long 10240 1024 1 2 32 30 multip. 
text 54272 1024 1 Field Start Length Type Huff tree Bits 1 1 1 constant 1 0 2 2 4 zerofill(1) 2 9 3 6 4 no zeros, zerofill(1) 2 9 4 10 1 3 9 5 11 20 table-lookup 4 0 6 31 1 3 9 7 32 30 no endspace, not_always 5 9 8 62 35 no endspace, not_always, no empty 6 9 9 97 35 no empty 7 9 10 132 35 no endspace, not_always, no empty 6 9 11 167 4 zerofill(1) 2 9 12 171 16 no endspace, not_always, no empty 5 9 13 187 35 no endspace, not_always, no empty 6 9 14 222 4 zerofill(1) 2 9 15 226 16 no endspace, not_always, no empty 5 9 16 242 20 no endspace, not_always 8 9 17 262 20 no endspace, no empty 8 9 18 282 20 no endspace, no empty 5 9 19 302 30 no endspace, no empty 6 9 20 332 4 always zero 2 9 21 336 4 always zero 2 9 22 340 1 3 9 23 341 8 table-lookup 9 0 24 349 8 table-lookup 10 0 25 357 8 always zero 2 9 26 365 2 2 9 27 367 2 no zeros, zerofill(1) 2 9 28 369 4 no zeros, zerofill(1) 2 9 29 373 4 table-lookup 11 0 30 377 1 3 9 31 378 2 no zeros, zerofill(1) 2 9 32 380 8 no zeros 2 9 33 388 4 always zero 2 9 34 392 4 table-lookup 12 0 35 396 4 no zeros, zerofill(1) 13 9 36 400 4 no zeros, zerofill(1) 2 9 37 404 1 2 9 38 405 4 no zeros 2 9 39 409 4 always zero 2 9 40 413 4 no zeros 2 9 41 417 4 always zero 2 9 42 421 4 no zeros 2 9 43 425 4 always zero 2 9 44 429 20 no empty 3 9 45 449 30 no empty 3 9 46 479 1 14 4 47 480 1 14 4 48 481 79 no endspace, no empty 15 9 49 560 79 no empty 2 9 50 639 79 no empty 2 9 51 718 79 no endspace 16 9 52 797 8 no empty 2 9 53 805 1 17 1 54 806 1 3 9 55 807 20 no empty 3 9 56 827 4 no zeros, zerofill(2) 2 9 57 831 4 no zeros, zerofill(1) 2 9
The information printed by myisampack is described here:
normal
empty-space
empty-zero
empty-fill
INTEGER
column may be changed to MEDIUMINT).
pre-space
end-space
table-lookup
ENUM before Huffman compression.
zero
Original trees
After join
After a table has been compressed, myisamchk -dvv prints additional
information about each field:
Type
constant
no endspace
no endspace, not_always
no endspace, no empty
table-lookup
ENUM.
zerofill(n)
n bytes in the value are always 0 and are not
stored.
no zeros
always zero
Huff tree
Bits
After you have run pack_isam/myisampack you must run
isamchk/myisamchk to re-create the index. At this time you
can also sort the index blocks and create statistics needed for
the MySQL optimiser to work more efficiently:
myisamchk -rq --analyze --sort-index table_name.MYI
isamchk -rq --analyze --sort-index table_name.ISM
After you have installed the packed table into the MySQL database
directory you should do mysqladmin flush-tables to force mysqld
to start using the new table.
If you want to unpack a packed table, you can do this with the
--unpack option to isamchk or myisamchk.
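For example, assuming a packed MyISAM table named station (the table name is only illustrative), unpacking it again could look like this:
shell> myisamchk --unpack station.MYI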
mysqld-max, An Extended mysqld Server
mysqld-max is the MySQL server (mysqld) configured with
the following configure options:
| Option | Comment |
| --with-server-suffix=-max | Add a suffix to the mysqld version string. |
| --with-innodb | Support for InnoDB tables in version 3.23. |
| --with-bdb | Support for Berkeley DB (BDB) tables. |
| CFLAGS=-DUSE_SYMDIR | Symbolic links support for Windows. |
You can find the MySQL-Max binaries at http://www.mysql.com/downloads/mysql-max-3.23.html.
The Windows MySQL binary distributions include both the
standard mysqld.exe binary and the mysqld-max.exe binary. See
http://www.mysql.com/downloads/mysql-3.23.html.
See section 2.1.2 Installing MySQL on Windows.
Note that because BerkeleyDB (BDB) is not available for all platforms,
some of the Max binaries may not include support for it.
You can check which table types are supported by doing the following
query:
mysql> SHOW VARIABLES LIKE "have_%";
+------------------+----------+
| Variable_name    | Value    |
+------------------+----------+
| have_bdb         | NO       |
| have_crypt       | YES      |
| have_innodb      | YES      |
| have_isam        | YES      |
| have_raid        | NO       |
| have_symlink     | DISABLED |
| have_openssl     | NO       |
| have_query_cache | YES      |
+------------------+----------+
The meanings of the values are:
| Value | Meaning |
| YES | The option is activated and usable. |
| NO | MySQL is not compiled with support for this option. |
| DISABLED | The xxxx option is disabled because one started mysqld with --skip-xxxx, or because one didn't start mysqld with all the options needed to enable it. In this case the hostname.err file should contain a reason why the option is disabled. |
Note: To be able to create InnoDB tables in MySQL version 3.23
you must edit
your startup options to include at least the innodb_data_file_path
option. See section 7.5.2 InnoDB in MySQL Version 3.23.
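A minimal sketch of the corresponding `my.cnf' entry follows; the data file name and size are assumptions for illustration, not recommendations:
[mysqld]
innodb_data_file_path = ibdata1:30M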
To get better performance for BDB tables, you should add some configuration
options for these too. See section 7.6.3 BDB startup options.
safe_mysqld will automatically try to start any mysqld binary
with the -max suffix. This makes it very easy to test out
another mysqld binary in an existing installation. Just
run configure with the options you want and then install the
new mysqld binary as mysqld-max in the same directory
where your old mysqld binary is. See section 4.7.2 safe_mysqld, The Wrapper Around mysqld.
The mysqld-max RPM uses the above-mentioned safe_mysqld
feature. It just installs the mysqld-max executable and
safe_mysqld will automatically use this executable when
safe_mysqld is restarted.
The following table shows which table types our standard MySQL-Max binaries include:
| System | BDB | InnoDB |
| AIX 4.3 | N | Y |
| HP-UX 11.0 | N | Y |
| Linux-Alpha | N | Y |
| Linux-Intel | Y | Y |
| Linux-IA64 | N | Y |
| Solaris-Intel | N | Y |
| Solaris-SPARC | Y | Y |
| SCO OSR5 | Y | Y |
| UnixWare | Y | Y |
| Windows/NT | Y | Y |
All MySQL clients that communicate with the server using the
mysqlclient library use the following environment variables:
| Name | Description |
| MYSQL_UNIX_PORT | The default socket; used for connections to localhost. |
| MYSQL_TCP_PORT | The default TCP/IP port. |
| MYSQL_PWD | The default password. |
| MYSQL_DEBUG | Debug-trace options when debugging. |
| TMPDIR | The directory where temporary tables/files are created. |
Use of MYSQL_PWD is insecure.
See section 4.2.8 Connecting to the MySQL Server.
The `mysql' client uses the file named in the MYSQL_HISTFILE
environment variable to save the command-line history. The default value for
the history file is `$HOME/.mysql_history', where $HOME is the
value of the HOME environment variable. See section F Environment Variables.
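For example, under a Bourne-compatible shell you might set a non-default TCP/IP port and history file before starting a client; the values shown are only examples:
shell> export MYSQL_TCP_PORT=3307
shell> export MYSQL_HISTFILE=/home/jane/.mysql_history_test
shell> mysql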
All MySQL programs take many different options. However, every
MySQL program provides a --help option that you can use
to get a full description of the program's different options. For example, try
mysql --help.
You can override default options for all standard client programs with an option file. See section 4.1.2 `my.cnf' Option Files.
The following list briefly describes the client-side MySQL programs:
msql2mysql
A shell script that converts mSQL programs to MySQL. It doesn't
handle all cases, but it gives a good start when converting.
mysqlaccess
mysqladmin
A utility for performing administrative operations. mysqladmin can also
be used to retrieve version, process, and status information from the
server. See section 4.8.3 mysqladmin, Administrating a MySQL Server.
mysqldump
Dumps a MySQL database into a file as SQL statements or as tab-separated
text files. See the section mysqldump, Dumping Table Structure and Data.
mysqlimport
Imports text files into their respective tables using LOAD DATA
INFILE. See section 4.8.7 mysqlimport, Importing Data from Text Files.
mysqlshow
Displays information about existing databases, their tables, and the
tables' columns.
replace
A utility program that is used by msql2mysql, but that has more
general applicability as well. replace changes strings in place in
files or on the standard input. It uses a finite state machine to match
longer strings first and can be used to swap strings. For example, this
command swaps a and b in the given files:
shell> replace a b b a -- file1 file2 ...
mysql, The Command-line Tool
mysql is a simple SQL shell (with GNU readline capabilities).
It supports interactive and non-interactive use. When used interactively,
query results are presented in an ASCII-table format. When used
non-interactively (for example, as a filter), the result is presented in
tab-separated format. (The output format can be changed using command-line
options.) You can run scripts simply like this:
shell> mysql database < script.sql > output.tab
If you have problems due to insufficient memory in the client, use the
--quick option! This forces mysql to use
mysql_use_result() rather than mysql_store_result() to
retrieve the result set.
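For example, a memory-constrained batch run of the earlier script could be invoked like this (file names as in the previous example):
shell> mysql --quick database < script.sql > output.tab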
Using mysql is very easy. Just start it as follows:
shell> mysql database
or:
shell> mysql --user=user_name --password=your_password database
Then type an SQL statement, end it with `;', `\g', or `\G'
and press Enter.
mysql supports the following options:
-?, --help
-A, --no-auto-rehash
--prompt=...
-b, --no-beep
-B, --batch
--character-sets-dir=...
-C, --compress
-#, --debug[=...]
-D, --database=...
--default-character-set=...
-e, --execute=...
-E, --vertical
\G.
-f, --force
-g, --no-named-commands
-G, --enable-named-commands
-i, --ignore-space
-h, --host=...
-H, --html
-X, --xml
-L, --skip-line-numbers
--no-pager
--no-tee
-n, --unbuffered
-N, --skip-column-names
-O, --set-variable var=option
--help lists variables.
Please note that --set-variable is deprecated since MySQL 4.0,
just use --var=option on its own.
-o, --one-database
--pager[=...]
ENV variable PAGER. Valid
pagers are less, more, cat [> filename], etc. See interactive help (\h)
also. This option does not work in batch mode. Pager works only in Unix.
-p[password], --password[=...]
The password to use when connecting to the server. Note that with the
short form -p you can't have a space between the option and the
password.
-P port_num, --port=port_num
--protocol=(TCP | SOCKET | PIPE | MEMORY)
-q, --quick
-r, --raw
--batch
--reconnect
-s, --silent
-S --socket=...
-t --table
-T, --debug-info
--tee=...
-u, --user=#
-U, --safe-updates[=#], --i-am-a-dummy[=#]
UPDATE and DELETE that uses keys. See below for
more information about this option. You can reset this option if you have
it in your `my.cnf' file by using --safe-updates=0.
-v, --verbose
-V, --version
-w, --wait
You can also set the following variables with -O or
--set-variable; please note that --set-variable
is deprecated since MySQL 4.0, just use --var=option on its own:
| Variable Name | Default | Description |
| connect_timeout | 0 | Number of seconds before connection timeout. |
| max_allowed_packet | 16777216 | Maximum packet length to send to or receive from the server. |
| net_buffer_length | 16384 | Buffer size for TCP/IP and socket communication. |
| select_limit | 1000 | Automatic limit for SELECT when using --i-am-a-dummy. |
| max_join_size | 1000000 | Automatic limit for rows in a join when using --i-am-a-dummy. |
If the mysql client loses its connection to the server while
sending a query, it will immediately and automatically try once to
reconnect to the server and send the query again.
Note that even if it succeeds in reconnecting, your first
connection has ended and all your previous session objects are lost:
temporary tables, user variables, and session variables. Therefore, the
above behaviour may be dangerous, as in this example where the server
was shut down and restarted without your knowing it:
mysql> set @a=1;
Query OK, 0 rows affected (0.05 sec)
mysql> insert into t values(@a);
ERROR 2006: MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    1
Current database: test
Query OK, 1 row affected (1.30 sec)
mysql> select * from t;
+------+
| a    |
+------+
| NULL |
+------+
1 row in set (0.05 sec)
The @a user variable has been lost with the connection, and
after the reconnection it is undefined.
To protect from this risk, you can start the mysql client
with the --disable-reconnect option.
If you type 'help' on the command-line, mysql will print out the
commands that it supports:
mysql> help
MySQL commands:
help (\h) Display this text.
? (\h) Synonym for `help'.
clear (\c) Clear command.
connect (\r) Reconnect to the server.
Optional arguments are db and host.
edit (\e) Edit command with $EDITOR.
ego (\G) Send command to mysql server,
display result vertically.
exit (\q) Exit mysql. Same as quit.
go (\g) Send command to mysql server.
nopager (\n) Disable pager, print to stdout.
notee (\t) Don't write into outfile.
pager (\P) Set PAGER [to_pager].
Print the query results via PAGER.
print (\p) Print current command.
prompt (\R) Change your mysql prompt.
quit (\q) Quit mysql.
rehash (\#) Rebuild completion hash.
source (\.) Execute a SQL script file.
Takes a file name as an argument.
status (\s) Get status information from the server.
system (\!) Execute a system shell command.
tee (\T) Set outfile [to_outfile].
Append everything into given outfile.
use (\u) Use another database.
Takes database name as argument.
The edit, nopager, pager, and system commands
work only in Unix.
The status command gives you some information about the
connection and the server you are using. If you are running in the
--safe-updates mode, status will also print the values for
the mysql variables that affect your queries.
A useful startup option for beginners (introduced in MySQL
Version 3.23.11) is --safe-updates (or --i-am-a-dummy for
users that has at some time done a DELETE FROM table_name but
forgot the WHERE clause). When using this option, mysql
sends the following command to the MySQL server when opening
the connection:
SET SQL_SAFE_UPDATES=1,SQL_SELECT_LIMIT=#select_limit#,
SQL_MAX_JOIN_SIZE=#max_join_size#
where #select_limit# and #max_join_size# are variables that
can be set from the mysql command-line. See section 5.5.6 SET Syntax.
The effect of the above is:
You are not allowed to execute an UPDATE or DELETE statement
if you don't have a key constraint in the WHERE part. One can,
however, force an UPDATE/DELETE by using LIMIT:
UPDATE table_name SET not_key_column=# WHERE not_key_column=# LIMIT 1;
All big results are automatically limited to #select_limit# rows.
SELECTs that will probably need to examine more than
#max_join_size# row combinations will be aborted.
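For example, to start the client in safe-updates mode with tighter limits than the defaults (the values and database name here are only illustrative), you could use:
shell> mysql --safe-updates -O select_limit=500 -O max_join_size=10000 database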
Some useful hints about the mysql client:
Some data is much more readable when displayed vertically instead of in the usual horizontal box format. For example, longer text that includes new lines is often much easier to read with vertical output.
mysql> SELECT * FROM mails WHERE LENGTH(txt) < 300 LIMIT 300,1\G
*************************** 1. row ***************************
msg_nro: 3068
date: 2000-03-01 23:29:50
time_zone: +0200
mail_from: Monty
reply: monty@no.spam.com
mail_to: "Thimble Smith" <tim@no.spam.com>
sbj: UTF-8
txt: >>>>> "Thimble" == Thimble Smith writes:
Thimble> Hi. I think this is a good idea. Is anyone familiar with UTF-8
Thimble> or Unicode? Otherwise, I'll put this on my TODO list and see what
Thimble> happens.
Yes, please do that.
Regards,
Monty
file: inbox-jani-1
hash: 190402944
1 row in set (0.09 sec)
For logging, you can use the tee option. The tee can be
started with option --tee=..., or from the command-line
interactively with command tee. All the data displayed on the
screen will also be appended into a given file. This can be very useful
for debugging purposes also. The tee can be disabled from the
command-line with command notee. Executing tee again
starts logging again. Without a parameter the previous file will be
used. Note that tee will flush the results into the file after
each command, just before the command-line appears again waiting for the
next command.
Browsing or searching query results in interactive mode with the Unix
less, more, or any other similar program is possible with the
--pager[=...] option. Without an argument, the mysql client will look
for the PAGER environment variable and set the pager to that.
The pager can be enabled from the interactive command-line with the
command pager and disabled with the command nopager. The
command optionally takes an argument, and the pager will be set to
that. The command pager can be called without an argument, but this
requires that the option --pager was used, or the pager
will default to stdout. The pager works only in Unix, since it uses
the popen() function, which doesn't exist in Windows. In Windows, the
tee option can be used instead, although it may not be as handy
as the pager can be in some situations.
A few tips about pager:
mysql> pager cat > /tmp/log.txt
and the results will go only to a file. You can also pass any options for
the programs that you want to use with the pager:
mysql> pager less -n -i -S
mysql> pager cat | tee /dr1/tmp/res.txt | \ tee /dr2/tmp/res2.txt | less -n -i -S
You can also combine the two functions above: have the tee
enabled and the pager set to 'less', and you will be able to browse the
results in Unix 'less' and still have everything appended to a file
at the same time. The difference between the Unix tee used with the
pager and the mysql client's built-in tee is that
the built-in tee works even if you don't have the Unix tee
available. The built-in tee also logs everything that is printed
on the screen, whereas the Unix tee used with the pager doesn't
log quite that much. Last, but not least, the interactive tee is
handier to switch on and off when you want to log something into a
file, but want to be able to turn the feature off sometimes.
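As a simple illustration of this combination (the log file name is arbitrary):
mysql> pager less
mysql> tee /tmp/query_session.log
Query output then goes through less while also being appended to the file; nopager and notee turn the two features off again.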
From MySQL version 4.0.2 it is possible to change the prompt in the
mysql command-line client.
You can use the following prompt options:
| Option | Description |
| \v | mysqld version |
| \d | database in use |
| \h | host connected to |
| \p | port connected on |
| \u | username |
| \U | full username@host |
| \\ | `\' |
| \n | new line break |
| \t | tab |
| \ | space |
| \_ | space |
| \R | military hour time (0-23) |
| \r | standard hour time (1-12) |
| \m | minutes |
| \y | two digit year |
| \Y | four digit year |
| \D | full date format |
| \s | seconds |
| \w | day of the week in three letter format (Mon, Tue, ...) |
| \P | am/pm |
| \o | month in number format |
| \O | month in three letter format (Jan, Feb, ...) |
| \c | counter that counts up for each command you do |
`\' followed by any other letter just becomes that letter.
You may set the prompt in the following places:
Set the MYSQL_PS1 environment variable to a prompt string. For
example:
shell> export MYSQL_PS1="(\u@\h) [\d]> "
Use the prompt option in any MySQL configuration file, in the
[mysql] group. For example:
[mysql]
prompt=(\u@\h) [\d]>\_
Use the --prompt option on the command line to mysql.
For example:
shell> mysql --prompt="(\u@\h) [\d]> "
(user@host) [database]>
Use the prompt (or \R) command to change your
prompt interactively. For example:
mysql> prompt (\u@\h) [\d]>\_
PROMPT set to '(\u@\h) [\d]>\_'
(user@host) [database]>
(user@host) [database]> prompt
Returning to default PROMPT of mysql>
mysql>
mysqladmin, Administrating a MySQL Server
A utility for performing administrative operations. The syntax is:
shell> mysqladmin [OPTIONS] command [command-option] command ...
You can get a list of the options your version of mysqladmin supports
by executing mysqladmin --help.
The current mysqladmin supports the following commands:
create databasename
drop databasename
extended-status
flush-hosts
flush-logs
flush-tables
flush-privileges
kill id,id,...
password
ping
processlist
reload
refresh
shutdown
slave-start
slave-stop
status
variables
version
All commands can be shortened to their unique prefix. For example:
shell> mysqladmin proc stat
+----+-------+-----------+----+-------------+------+-------+------+
| Id | User  | Host      | db | Command     | Time | State | Info |
+----+-------+-----------+----+-------------+------+-------+------+
| 6  | monty | localhost |    | Processlist | 0    |       |      |
+----+-------+-----------+----+-------------+------+-------+------+
Uptime: 10077  Threads: 1  Questions: 9  Slow queries: 0  Opens: 6  Flush tables: 1  Open tables: 2  Memory in use: 1092K  Max memory used: 1116K
The mysqladmin status command result has the following columns:
| Column | Description |
| Uptime | Number of seconds the MySQL server has been up. |
| Threads | Number of active threads (clients). |
| Questions | Number of questions from clients since mysqld was started. |
| Slow queries | Queries that have taken more than long_query_time seconds. See section 4.9.5 The Slow Query Log. |
| Opens | How many tables mysqld has opened. |
| Flush tables | Number of flush ..., refresh, and reload commands. |
| Open tables | Number of tables that are open now. |
| Memory in use | Memory allocated directly by the mysqld code (only available when MySQL is compiled with --with-debug=full). |
| Max memory used | Maximum memory allocated directly by the mysqld code (only available when MySQL is compiled with --with-debug=full). |
If you do mysqladmin shutdown on a socket (in other words, on
the computer where mysqld is running), mysqladmin will
wait until the MySQL pid-file is removed to ensure that
the mysqld server has stopped properly.
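For example, a local shutdown through an explicitly named socket (the path is only illustrative) could look like this:
shell> mysqladmin -u root -p -S /tmp/mysql.sock shutdown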
mysqlcheck for Table Maintenance and Crash Recovery
Since MySQL version 3.23.38 you can use a
checking and repairing tool for MyISAM tables. The difference from
myisamchk is that mysqlcheck should be used when the
mysqld server is running, whereas myisamchk should be used
when it is not. The benefit is that you no longer have to take the
server down to check or repair your tables.
mysqlcheck uses the MySQL server commands CHECK,
REPAIR, ANALYZE, and OPTIMIZE in a way that is convenient
for the user.
There are three alternative ways to invoke mysqlcheck:
shell> mysqlcheck [OPTIONS] database [tables]
shell> mysqlcheck [OPTIONS] --databases DB1 [DB2 DB3...]
shell> mysqlcheck [OPTIONS] --all-databases
So it can be used in a similar way as mysqldump when it
comes to choosing which databases and tables to process.
mysqlcheck has a special feature compared to the other
clients: the default behaviour of checking tables (-c) can be changed by
renaming the binary. So if you want a tool that repairs tables
by default, you can simply copy mysqlcheck under a new name,
mysqlrepair, or alternatively make a symbolic link to mysqlcheck
and name the symbolic link mysqlrepair. If you invoke mysqlrepair, it
will repair tables by default.
The names that you can use to change the mysqlcheck default behaviour
are:
mysqlrepair:   The default option will be -r
mysqlanalyze:  The default option will be -a
mysqloptimize: The default option will be -o
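For example, a symbolic link as described above could be created like this (the installation paths and database name are assumptions):
shell> ln -s /usr/local/mysql/bin/mysqlcheck /usr/local/mysql/bin/mysqlrepair
shell> mysqlrepair mydb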
The options available for mysqlcheck are listed here; please
check what your version supports with mysqlcheck --help.
-A, --all-databases
-1, --all-in-1
-a, --analyze
--auto-repair
-#, --debug=...
--character-sets-dir=...
-c, --check
-C, --check-only-changed
--compress
-?, --help
-B, --databases
--default-character-set=...
-F, --fast
-f, --force
-e, --extended
-h, --host=...
-m, --medium-check
-o, --optimize
-p, --password[=...]
-P, --port=...
--protocol=(TCP | SOCKET | PIPE | MEMORY)
-q, --quick
-r, --repair
-s, --silent
-S, --socket=...
--tables
-u, --user=#
-v, --verbose
-V, --version
mysqldump, Dumping Table Structure and Data
A utility to dump a database or a collection of databases for backup or for transferring the data to another SQL server (not necessarily a MySQL server). The dump will contain SQL statements to create the table and/or populate the table.
If you are doing a backup on the server, you should consider using
mysqlhotcopy instead. See section 4.8.6 mysqlhotcopy, Copying MySQL Databases and Tables.
shell> mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
If you don't give any tables or if you use the --databases or
--all-databases option, entire databases will be dumped.
You can get a list of the options your version of mysqldump supports
by executing mysqldump --help.
Note that if you run mysqldump without --quick or
--opt, mysqldump will load the whole result set into
memory before dumping the result. This will probably be a problem if
you are dumping a big database.
Note that if you are using a new copy of the mysqldump program
and you are going to do a dump that will be read into a very old MySQL
server, you should not use the --opt or -e options.
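For example, a memory-friendly dump of a large database (the database name is illustrative) can rely on --quick alone:
shell> mysqldump --quick big_db > big_db.sql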
mysqldump supports the following options:
--add-locks
LOCK TABLES before and UNLOCK TABLE after each table dump.
(To get faster inserts into MySQL.)
--add-drop-table
drop table before each create statement.
-A, --all-databases
--databases with all
databases selected.
-a, --all
--allow-keywords
-c, --complete-insert
-C, --compress
-B, --databases
USE db_name; will be included in the output before each new database.
--delayed
INSERT DELAYED command.
-e, --extended-insert
INSERT syntax. (Gives more compact and
faster insert statements.)
-#, --debug[=option_string]
--help
--fields-terminated-by=...
--fields-enclosed-by=...
--fields-optionally-enclosed-by=...
--fields-escaped-by=...
--lines-terminated-by=...
-T option and have the same
meaning as the corresponding clauses for LOAD DATA INFILE.
See section 6.4.9 LOAD DATA INFILE Syntax.
-F, --flush-logs
-f, --force,
-h, --host=..
localhost.
-l, --lock-tables.
READ LOCAL to allow concurrent inserts in the case of MyISAM
tables.
Please note that when dumping multiple databases, --lock-tables
will lock tables for each database separately. So using this option will
not guarantee your tables will be logically consistent between databases.
Tables in different databases may be dumped in completely different
states.
-K, --disable-keys
/*!40000 ALTER TABLE tb_name DISABLE KEYS */; and
/*!40000 ALTER TABLE tb_name ENABLE KEYS */;
will be put in the output. This will make loading the data into a MySQL
4.0 server faster as the indexes are created after all data are inserted.
-n, --no-create-db
CREATE DATABASE /*!32312 IF NOT EXISTS*/ db_name; will not be put in the
output. The above line will be added otherwise, if a --databases or
--all-databases option was given.
-t, --no-create-info
CREATE TABLE statement).
-d, --no-data
--opt
--quick --add-drop-table --add-locks --extended-insert
--lock-tables. Should give you the fastest possible dump for reading
into a MySQL server.
-pyour_pass, --password[=your_pass]
mysqldump you will be prompted for a password.
-P, --port=...
--protocol=(TCP | SOCKET | PIPE | MEMORY)
-q, --quick
mysql_use_result()
to do this.
-Q, --quote-names
-r, --result-file=...
--single-transaction
Issues a BEGIN SQL command before dumping data from the
server. It is mostly useful with InnoDB tables and the
READ_COMMITTED transaction isolation level, as in this mode it
will dump the consistent state of the database at the time
BEGIN was issued, without blocking any applications.
When using this option you should keep in mind that only transactional
tables will be dumped in a consistent state; for example, any MyISAM or
HEAP tables dumped while using this option may still change
state.
The --single-transaction option was added in version 4.0.2.
This option is mutually exclusive with the --lock-tables option
as LOCK TABLES already commits a previous transaction internally.
-S /path/to/socket, --socket=/path/to/socket
localhost (which is the
default host).
--tables
-T, --tab=path-to-some-directory
Creates a table_name.sql file, which contains the SQL CREATE commands,
and a table_name.txt file, which contains the data, for each given table.
The format of the `.txt' file is made according to the
--fields-xxx and --lines-xxx options.
Note: This option only works if mysqldump is run on the same
machine as the mysqld daemon, and the user/group that mysqld
is running as (normally user mysql, group mysql) needs to have
permission to create/write a file at the location you specify.
-u user_name, --user=user_name
-O var=option, --set-variable var=option
--set-variable
is deprecated since MySQL 4.0, just use --var=option on its own.
-v, --verbose
-V, --version
-w, --where='where-condition'
"--where=user='jimf'" "-wuserid>1" "-wuserid<1"
-X, --xml
-x, --first-slave
--master-data
--first-slave, but also prints some CHANGE MASTER
TO commands which will later make your slave start from the right position
in the master's binlogs, if you have set up your slave using this SQL
dump of the master.
-O net_buffer_length=#, where # < 16M
When creating multi-row insert statements (as with the option
--extended-insert or --opt), mysqldump will create
rows up to net_buffer_length in length. If you increase this
variable, you should also ensure that the max_allowed_packet
variable in the MySQL server is bigger than
net_buffer_length.
The most normal use of mysqldump is probably for making a backup of
whole databases. See section 4.4.1 Database Backups.
mysqldump --opt database > backup-file.sql
You can read this back into MySQL with:
mysql database < backup-file.sql
or
mysql -e "source /patch-to-backup/backup-file.sql" database
However, it's also very useful to populate another MySQL server with information from a database:
mysqldump --opt database | mysql --host=remote-host -C database
It is possible to dump several databases with one command:
mysqldump --databases database1 [database2 ...] > my_databases.sql
If all the databases are wanted, one can use:
mysqldump --all-databases > all_databases.sql
mysqlhotcopy, Copying MySQL Databases and Tables
mysqlhotcopy is a Perl script that uses LOCK TABLES,
FLUSH TABLES and cp or scp to quickly make a backup
of a database. It's the fastest way to make a backup of the database
or single tables, but it can only be run on the same machine where the
database directories are.
mysqlhotcopy db_name [/path/to/new_directory]
mysqlhotcopy db_name_1 ... db_name_n /path/to/new_directory
mysqlhotcopy db_name./regex/
mysqlhotcopy supports the following options:
-?, --help
-u, --user=#
-p, --password=#
-P, --port=#
-S, --socket=#
--allowold
--keepold
--noindices
myisamchk -rq..
--method=#
cp or scp).
-q, --quiet
--debug
-n, --dryrun
--regexp=#
--suffix=#
--checkpoint=#
--flushlog
--tmpdir=#
You can use perldoc mysqlhotcopy to get more complete
documentation for mysqlhotcopy.
mysqlhotcopy reads the groups [client] and [mysqlhotcopy]
from the option files.
To be able to execute mysqlhotcopy, you need write access to the
backup directory, the SELECT privilege for the tables you are about to
copy, and the MySQL RELOAD privilege (to be able to
execute FLUSH TABLES).
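A typical invocation that copies one database into a backup directory, overwriting a previous copy if present, might look like this (the database name and path are only illustrative):
shell> mysqlhotcopy --allowold db_name /path/to/backup_dir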
mysqlimport, Importing Data from Text Files
mysqlimport provides a command-line interface to the LOAD DATA
INFILE SQL statement. Most options to mysqlimport correspond
directly to the same options to LOAD DATA INFILE.
See section 6.4.9 LOAD DATA INFILE Syntax.
mysqlimport is invoked like this:
shell> mysqlimport [options] database textfile1 [textfile2 ...]
For each text file named on the command-line,
mysqlimport strips any extension from the filename and uses the result
to determine which table to import the file's contents into. For example,
files named `patient.txt', `patient.text', and `patient' would
all be imported into a table named patient.
mysqlimport supports the following options:
-c, --columns=...
LOAD DATA INFILE command,
which is then passed to MySQL. See section 6.4.9 LOAD DATA INFILE Syntax.
-C, --compress
-#, --debug[=option_string]
-d, --delete
--fields-terminated-by=...
--fields-enclosed-by=...
--fields-optionally-enclosed-by=...
--fields-escaped-by=...
--lines-terminated-by=...
LOAD DATA INFILE. See section 6.4.9 LOAD DATA INFILE Syntax.
-f, --force
Without --force,
mysqlimport exits if a table doesn't exist.
--help
-h host_name, --host=host_name
localhost.
-i, --ignore
--replace option.
-l, --lock-tables
-L, --local
localhost (which is the default host).
-pyour_pass, --password[=your_pass]
mysqlimport you will be prompted for a password.
-P port_num, --port=port_num
--protocol=(TCP | SOCKET | PIPE | MEMORY)
-r, --replace
--replace and --ignore options control handling of input
records that duplicate existing records on unique key values. If you specify
--replace, new rows replace existing rows that have the same unique key
value. If you specify --ignore, input rows that duplicate an existing
row on a unique key value are skipped. If you don't specify either option, an
error occurs when a duplicate key value is found, and the rest of the text
file is ignored.
-s, --silent
-S /path/to/socket, --socket=/path/to/socket
localhost (which is the
default host).
-u user_name, --user=user_name
-v, --verbose
-V, --version
Here is a sample run using mysqlimport:
$ mysql --version
mysql  Ver 9.33 Distrib 3.22.25, for pc-linux-gnu (i686)
$ uname -a
Linux xxx.com 2.2.5-15 #1 Mon Apr 19 22:21:09 EDT 1999 i586 unknown
$ mysql -e 'CREATE TABLE imptest(id INT, n VARCHAR(30))' test
$ ed
a
100     Max Sydow
101     Count Dracula
.
w imptest.txt
32
q
$ od -c imptest.txt
0000000   1   0   0  \t   M   a   x       S   y   d   o   w  \n   1   0
0000020   1  \t   C   o   u   n   t       D   r   a   c   u   l   a  \n
0000040
$ mysqlimport --local test imptest.txt
test.imptest: Records: 2  Deleted: 0  Skipped: 0  Warnings: 0
$ mysql -e 'SELECT * FROM imptest' test
+------+---------------+
| id   | n             |
+------+---------------+
|  100 | Max Sydow     |
|  101 | Count Dracula |
+------+---------------+
mysqlshow, Showing Databases, Tables, and Columns
mysqlshow can be used to quickly look at which databases exist,
their tables, and the tables' columns.
With the mysql program you can get the same information with the
SHOW commands. See section 4.5.7 SHOW Syntax.
mysqlshow is invoked like this:
shell> mysqlshow [OPTIONS] [database [table [column]]]
Note that in newer MySQL versions, you see only those databases, tables, and columns for which you have some privileges.
If the last argument contains a shell or SQL wildcard (*,
?, % or _), only what's matched by the wildcard
is shown. If a database name contains any underscores, those should be
escaped with a backslash (some Unix shells will require two) in order to
see the proper tables and columns. '*' is converted into the SQL '%'
wildcard and '?' into the SQL '_' wildcard. This may cause some
confusion when you try to display the columns for a table with a _ in
its name, because in this case mysqlshow shows you only the table names
that match the pattern. This is easily fixed by adding an extra % last
on the command-line (as a separate argument).
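For example, with a hypothetical table named log_1, the first command below would list only the table names matching the pattern, while adding a trailing % shows the columns:
shell> mysqlshow mydb log_1
shell> mysqlshow mydb log_1 %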
mysql_config, Get compile options for compiling clients
mysql_config provides you with useful information about how to compile
your MySQL client and connect it to MySQL.
mysql_config supports the following options:
--cflags
--libs
--socket
--port
--version
--libmysqld-libs
If you execute mysql_config without any options, it will print
all the options it supports, plus the value of each option:
shell> mysql_config
Usage: /usr/local/mysql/bin/mysql_config [OPTIONS]
Options:
--cflags [-I'/usr/local/mysql/include/mysql']
--libs [-L'/usr/local/mysql/lib/mysql' -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib -lssl -lcrypto]
--socket [/tmp/mysql.sock]
--port [3306]
--version [4.0.8-gamma]
--libmysqld-libs [ -L'/usr/local/mysql/lib/mysql' -lmysqld -lpthread -lz -lcrypt -lnsl -lm -lpthread -lrt]
You can use this to compile a MySQL client as follows:
CFG=/usr/local/mysql/bin/mysql_config
sh -c "gcc -o progname `$CFG --cflags` progname.c `$CFG --libs`"
perror, Explaining Error Codes
For most system errors MySQL will, in addition to an internal text message,
also print the system error code in one of the following styles:
message ... (errno: #) or message ... (Errcode: #).
You can find out what the error code means by examining the
documentation for your system or by using the perror utility.
perror prints a description for a system error code or for a MyISAM/ISAM
storage engine (table handler) error code.
perror is invoked like this:
shell> perror [OPTIONS] [ERRORCODE [ERRORCODE...]]
Example:
shell> perror 13 64
Error code  13:  Permission denied
Error code  64:  Machine is not on the network
Note that the error messages are mostly system dependent!
The mysql client typically is used interactively, like this:
shell> mysql database
However, it's also possible to put your SQL commands in a file and tell
mysql to read its input from that file. To do so, create a text
file `text_file' that contains the commands you wish to execute.
Then invoke mysql as shown here:
shell> mysql database < text_file
You can also start your text file with a USE db_name statement. In
this case, it is unnecessary to specify the database name on the command
line:
shell> mysql < text_file
If you are already running mysql, you can execute a SQL
script file using the source command:
mysql> source filename;
For more information about batch mode, see section 3.6 Using mysql in Batch Mode.
MySQL has several different log files that can help you find
out what's going on inside mysqld:
| Log file | Description |
| The error log | Problems encountered when starting, running, or stopping mysqld. |
| The isam log | Logs all changes to the ISAM tables. Used only for debugging the ISAM code. |
| The query log | Established connections and executed queries. |
| The update log | Deprecated: Stores all statements that change data. |
| The binary log | Stores all statements that change something. Also used for replication. |
| The slow log | Stores all queries that took more than long_query_time to execute, or that didn't use indexes. |
All logs can be found in the mysqld data directory. You can
force mysqld to reopen the log files (or in some cases
switch to a new log) by executing FLUSH LOGS. See section 4.5.3 FLUSH Syntax.
The error log file contains information indicating when mysqld
was started and stopped and also any critical errors found when running.
If mysqld dies unexpectedly and mysqld_safe needs to
restart mysqld, mysqld_safe will write a restarted
mysqld row in this file. This log also holds a warning if
mysqld notices a table that needs to be automatically checked or
repaired.
On some operating systems, the error log will contain a stack trace
for where mysqld died. This can be used to find out where
mysqld died. See section E.1.4 Using a Stack Trace.
Beginning with MySQL 4.0.10 you can specify where mysqld stores the
error log file with the option --log-error[=filename]. If no file
name is given mysqld will use mysql-data-dir/'hostname'.err on
Unix and `\mysql\data\mysql.err' on Windows.
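A minimal sketch of the corresponding `my.cnf' entry follows; the path is an assumption for illustration:
[mysqld]
log-error=/var/log/mysql/mysqld.err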
If you execute flush logs the old file will be prefixed with
--old and mysqld will create a new empty log file.
In older MySQL versions the error log handling was done by
mysqld_safe which redirected the error file to
'hostname'.err. One could change this file name with the option
--err-log=filename.
If you don't specify --log-error or if you use the --console
option the errors will be written to stderr (the terminal).
On Windows, the output is always done to the .err file if
--console is not given.
If you want to know what happens within mysqld, you should start
it with --log[=file]. This will log all connections and queries
to the log file (by default named `'hostname'.log'). This log can
be very useful when you suspect an error in a client and want to know
exactly what mysqld thought the client sent to it.
Older versions of the mysql.server script (from MySQL 3.23.4 to 3.23.8)
pass safe_mysqld a --log option (enable general query log).
If you need better performance when you start using MySQL in a production
environment, you can remove the --log option from mysql.server
or change it to --log-bin. See section 4.9.4 The Binary Log.
The entries in this log are written as mysqld receives the questions.
This may be different from the order in which the statements are executed.
This is in contrast to the update log and the binary log which are written
after the query is executed, but before any locks are released.
Note: the update log is replaced by the binary log. See section 4.9.4 The Binary Log. With this you can do anything that you can do with the update log. The update log will be removed in MySQL 5.0.
When started with the --log-update[=file_name] option,
mysqld writes a log file containing all SQL commands that update
data. If no filename is given, it defaults to the name of the host
machine. If a filename is given, but it doesn't contain a path, the file
is written in the data directory. If `file_name' doesn't have an
extension, mysqld will create log file names like so:
`file_name.###', where ### is a number that is incremented each
time you execute mysqladmin refresh, execute mysqladmin
flush-logs, execute the FLUSH LOGS statement, or restart the server.
Note: for the above scheme to work, you must not create your own files with the same filename as the update log + some extensions that may be regarded as a number, in the directory used by the update log!
If you use the --log or -l options, mysqld writes a
general log with a filename of `hostname.log', and restarts and
refreshes do not cause a new log file to be generated (although it is closed
and reopened). In this case you can copy it (on Unix) by doing:
mv hostname.log hostname-old.log
mysqladmin flush-logs
cp hostname-old.log to-backup-directory
rm hostname-old.log
Update logging is smart because it logs only statements that really update
data. So an UPDATE or a DELETE with a WHERE that finds no
rows is not written to the log. It even skips UPDATE statements that
set a column to the value it already has.
The update logging is done immediately after a query completes but before any locks are released or any commit is done. This ensures that the log will be logged in the execution order.
If you want to update a database from update log files, you could do the following (assuming your update logs have names of the form `file_name.###'):
shell> ls -1 -t -r file_name.[0-9]* | xargs cat | mysql
ls is used to get all the log files in the right order.
This can be useful if you have to revert to backup files after a crash and you want to redo the updates that occurred between the time of the backup and the crash.
The intention is that the binary log should replace the update log, so we recommend you to switch to this log format as soon as possible! The update log will be removed in MySQL 5.0.
The binary log contains all information that is available in the update log in a more efficient format. It also contains information about how long each query took that updated the database. It doesn't contain queries that don't modify any data. If you want to log all queries (for example to find a problem query) you should use the general query log. See section 4.9.2 The General Query Log.
The binary log is also used when you are replicating a slave from a master. See section 4.10 Replication in MySQL.
When started with the --log-bin[=file_name] option, mysqld
writes a log file containing all SQL commands that update data. If no
file name is given, it defaults to the name of the host machine followed
by -bin. If file name is given, but it doesn't contain a path, the
file is written in the data directory.
If you supply an extension to --log-bin=filename.extension, the
extension will be silently removed.
To the binary log filename mysqld will append an extension that
is a number that is incremented each time you execute mysqladmin
refresh, execute mysqladmin flush-logs, execute the FLUSH
LOGS statement or restart the server. A new binary log will also
automatically be created when the current one's size reaches
max_binlog_size. Note if you are using
transactions: a transaction is written in one chunk to the binary log,
hence it is never split between several binary logs. Therefore, if you
have big transactions, you may see binlogs bigger than max_binlog_size.
You can delete all binary log files with the RESET MASTER
command (see section 4.5.4 RESET Syntax), or only some of them with
PURGE [MASTER] LOGS (see section 4.10.7 SQL Commands Related to Replication).
You can use the following options to mysqld to affect what is logged
to the binary log:
| Option | Description |
binlog-do-db=database_name |
Tells the master that it should log updates to the binary log if the
current database
(i.e. the one selected by USE)
is 'database_name'. All other
databases that are not explicitly mentioned are ignored.
Note that if you use this you should ensure that you only do updates in
the current database.
(Example: binlog-do-db=some_database)
Example of what does not work as you might expect: if the server is
started with binlog-do-db=sales, and you do
USE prices; UPDATE sales.january SET amount=amount+1000;,
this query will not be written into the binary log.
|
binlog-ignore-db=database_name |
Tells the master that updates where the current database
(i.e. the one selected by USE) is
'database_name' should not be stored in the binary log. Note that if
you use this you should ensure that you only do updates in the current
database.
(Example: binlog-ignore-db=some_database)
Example of what does not work as you might expect: if the server is
started with binlog-ignore-db=sales, and you do
USE prices; UPDATE sales.january SET amount=amount+1000;,
this query will be written into the binary log.
|
To be able to know which different binary log files have been used,
mysqld will also create a binary log index file that
contains the name of all used binary log files. By default this has the
same name as the binary log file, with the extension '.index'.
You can change the name of the binary log index file with the
--log-bin-index=[filename] option.
You should not manually edit this file while mysqld is running;
doing this would confuse mysqld.
If you are using replication, you should not delete old binary log
files until you are sure that no slave will ever need to use them.
One way to do this is to do mysqladmin flush-logs once a day and then
remove any logs that are more than 3 days old. You can remove them
manually, or preferably using PURGE [MASTER] LOGS
(see section 4.10.7 SQL Commands Related to Replication) which will also safely update the binary
log index file for you (and which can take a date argument since
MySQL 4.1).
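For example, to remove all binary logs older than a given one (the log file name is only illustrative):
mysql> PURGE MASTER LOGS TO 'hostname-bin.010';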
A connection with the SUPER privilege can disable the binary
logging of its queries using SET
SQL_LOG_BIN=0. See section 4.10.7 SQL Commands Related to Replication.
You can examine the binary log file with the mysqlbinlog command.
For example, you can update a MySQL server from the binary log
as follows:
shell> mysqlbinlog log-file | mysql -h server_name
You can also use the mysqlbinlog program to read the binary log
directly from a remote MySQL server!
mysqlbinlog --help will give you more information about how to use
this program.
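For example, you could first extract the statements to a file and inspect them before applying them (the file names are only illustrative):
shell> mysqlbinlog hostname-bin.001 > /tmp/statements.sql
shell> mysql -h server_name < /tmp/statements.sql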
If you are using BEGIN [WORK] or SET AUTOCOMMIT=0, you must
use the MySQL binary log for backups instead of the old update log,
which will be removed in MySQL 5.0.
The binary logging is done immediately after a query completes but before any locks are released or any commit is done. This ensures that the log will be logged in the execution order.
Updates to non-transactional tables are stored in the binary log
immediately after execution. For transactional tables such as BDB
or InnoDB tables, all updates (UPDATE, DELETE
or INSERT) that change tables are cached until a COMMIT
command is sent to the server. At this point mysqld writes the whole
transaction to the binary log before the COMMIT is executed.
Every thread will, on start, allocate a buffer of binlog_cache_size
to buffer queries. If a query is bigger than this, the thread will open
a temporary file to store the transaction. The temporary file will
be deleted when the thread ends.
The max_binlog_cache_size (default 4G) can be used to restrict the
total size used to cache a multi-query transaction. If a transaction is
bigger than this it will fail and roll back.
If you are using the update or binary log, concurrent inserts will
be converted to normal inserts when using CREATE ... SELECT or
INSERT ... SELECT.
This is to ensure that you can recreate an exact copy of your tables by
applying the log on a backup.
When started with the --log-slow-queries[=file_name] option,
mysqld writes a log file containing all SQL commands that took
more than long_query_time to execute. The time to acquire the initial
table locks is not counted as execution time.
The slow query log is logged after the query is executed and after all locks has been released. This may be different from the order in which the statements are executed.
If no file name is given, it defaults to the name of the host machine
suffixed with -slow.log. If a filename is given, but doesn't
contain a path, the file is written in the data directory.
The slow query log can be used to find queries that take a long time to
execute and are thus candidates for optimisation. With a large log, that
can become a difficult task. You can pipe the slow query log through the
mysqldumpslow command to get a summary of the queries which
appear in the log.
If you are using --log-long-format, queries that are not
using indexes are also printed. See section 4.1.1 mysqld Command-line Options.
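For example, assuming the default file name on a host called myhost, a summary could be produced like this:
shell> mysqldumpslow myhost-slow.log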
The MySQL Server can create a number of different log files, which make it easy to see what is going on. See section 4.9 The MySQL Log Files. One must however regularly clean up these files, to ensure that the logs don't take up too much disk space.
When using MySQL with log files, you will, from time to time, want to remove/backup old log files and tell MySQL to start logging on new files. See section 4.4.1 Database Backups.
On a Linux (Red Hat) installation, you can use the
mysql-log-rotate script for this. If you installed MySQL
from an RPM distribution, the script should have been installed
automatically. Note that you should be careful with this if you are using
the log for replication!
On other systems you must install a short script yourself that you
start from cron to handle log files.
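A minimal sketch of such a cron job (the schedule and paths are assumptions for illustration only):
# crontab entry: rotate the MySQL log every Sunday at 03:00
0 3 * * 0 cd /usr/local/mysql/data && mv mysql.log mysql.old && mysqladmin flush-logs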
You can force MySQL to start using new log files by using
mysqladmin flush-logs or by using the SQL command FLUSH LOGS.
If you are using MySQL Version 3.21 you must use mysqladmin refresh.
The above command does the following:
If standard logging (--log) or slow query logging
(--log-slow-queries) is used, closes and reopens the log file
(`mysql.log' and `hostname-slow.log' as default).
If update logging (--log-update) is used, closes the update log and
opens a new log file with a higher sequence number.
If you are using only an update log, you only have to flush the logs and then move away the old update log files to a backup. If you are using the normal logging, you can do something like:
shell> cd mysql-data-directory
shell> mv mysql.log mysql.old
shell> mysqladmin flush-logs
and then take a backup and remove `mysql.old'.
This section describes the various replication features in MySQL. It serves as a reference to the options available with replication. You will be introduced to replication and learn how to implement it. Toward the end, there are some frequently asked questions and descriptions of problems and how to solve them.
We suggest that you visit our website at http://www.mysql.com/ often and read updates to this section. Replication is constantly being improved, and we update the manual frequently with the most current information.
One-way replication can be used to increase both robustness and speed. For robustness you can have two systems and can switch to the backup if you have problems with the master. The extra speed is achieved by sending a part of the non-updating queries to the replica server. Of course this only works if non-updating queries dominate, but that is the normal case.
Starting in Version 3.23.15, MySQL supports one-way replication internally. One server acts as the master, while the other acts as the slave. Note that one server could play the roles of master in one pair and slave in the other. The master server keeps a binary log of updates (see section 4.9.4 The Binary Log) and an index file to binary logs to keep track of log rotation. The slave, upon connecting, informs the master where it left off since the last successfully propagated update, catches up on the updates, and then blocks and waits for the master to notify it of the new updates.
Note that when you are using replication, all updates to the replicated tables should be performed on the master, unless you are always careful to avoid conflicts between updates that users issue on the master and those that users issue on the slave.
Another benefit of using replication is that one can get non-disturbing backups of the system by doing a backup on a slave instead of doing it on the master. See section 4.4.1 Database Backups.
MySQL replication is based on the server keeping track of all changes to your database (updates, deletes, etc) in the binary log (see section 4.9.4 The Binary Log) and the slave server(s) reading the saved queries from the master server's binary log so that the slave can execute the same queries on its copy of the data.
It is very important to realise that the binary log is simply a record starting from a fixed point in time (the moment you enable binary logging). Any slaves which you set up will need copies of the data from your master as it existed the moment that you enabled binary logging on the master. If you start your slaves with data that doesn't agree with what was on the master when the binary log was started, your slaves may fail.
Please see the following table for an indication of master-slave compatibility between different versions. With regard to version 4.0, we recommend using the same version on both sides.
|                      | Master 3.23.33 and up | Master 4.0.0 | Master 4.0.1 | Master 4.0.3 and up |
| Slave 3.23.33 and up | yes | no  | no  | no  |
| Slave 4.0.0          | no  | yes | no  | no  |
| Slave 4.0.1          | yes | no  | yes | no  |
| Slave 4.0.3 and up   | yes | no  | no  | yes |
Note: MySQL Version 4.0.2 is not recommended for replication.
Starting from 4.0.0, one can use LOAD DATA FROM MASTER to set up
a slave. Be aware that LOAD DATA FROM MASTER currently works only
if all the tables on the master are MyISAM type, and will acquire a
global read lock, so no writes are possible while the tables are being
transferred from the master. When we implement hot lock-free table
backup (in MySQL 5.0), this global read lock will no longer be necessary.
Due to the above limitation, we recommend that at this point you use
LOAD DATA FROM MASTER only if the dataset on the master is relatively
small, or if a prolonged read lock on the master is acceptable. While the
actual speed of LOAD DATA FROM MASTER may vary from system to system,
a good rule for a rough estimate of how long it is going to take is 1 second
per 1 MB of the datafile. You will get close to the estimate if both master
and slave are equivalent to a 700 MHz Pentium machine and are connected through a
100 Mbit/s network, and your index file is about half the size of your data
file. Of course, this is only a rough order of magnitude estimate.
Once a slave is properly configured and running, it will simply connect
to the master and wait for updates to process. If the master goes away
or the slave loses connectivity with your master, it will keep trying to
connect every master-connect-retry seconds until it is able to
reconnect and resume listening for updates.
Each slave keeps track of where it left off. The master server has no knowledge of how many slaves there are or which ones are up-to-date at any given time.
Three threads are involved in replication: one on the master and two
on the slave.
When START SLAVE is issued, the I/O thread is created on the
slave. It connects to the master and asks it to send its binlogs. Then
one thread (named Binlog_dump in SHOW PROCESSLIST on the
master) is created on the master to send these binlogs. The I/O thread
reads what Binlog_dump sends and simply copies it to some local
files in the slave's data directory called relay logs.
The last thread, the SQL thread, is created on the slave; it reads the
relay logs and executes the queries they contain.
Here is how the three threads show up in SHOW PROCESSLIST:
| 76 | root | localhost | NULL | Binlog Dump | 42 | Slave: waiting for binlog update | NULL |
| 7 | system user |  | NULL | Connect | 3 | Reading master update | NULL |
| 8 | system user |  | NULL | Connect | 3 | Slave: waiting for binlog update | NULL |
Here thread 76 is on the master. Thread 7 is the I/O thread on the
slave.
Thread 8 is the SQL thread on the slave; note that the value in the
Time column can tell how late the slave is compared to the
master (see section 4.10.8 Replication FAQ).
Before MySQL 4.0.2, the I/O and SQL threads were one. The advantage brought by the two separate threads is that it makes the reading job and the execution job independent, so the reading job is not slowed down by the execution job. As soon as the slave starts, even if it has not been running for a while, the I/O thread can quickly fetch all the binlogs, while the SQL thread lags far behind and may take hours to catch up. If the slave stops, even though it has not executed everything yet, at least it has fetched everything, so the binlogs can be purged on the master, as a safe copy is locally stored on the slave for future use.
Relay logs are by default named as the hostname followed
by -relay-bin plus a numeric extension. A `-relay-bin.index' file
contains the list of all relay logs currently in use.
By default these files are in the slave's data directory.
Relay logs have the same format as binary logs, so they can be read
with mysqlbinlog.
A relay log is automatically deleted by the SQL thread as soon as it
no longer needs it (i.e. as soon as it has executed all its
events). The user or DBA has no command to delete relay logs or
even rotate them (FLUSH LOGS has no effect on relay logs),
as the SQL thread does the job.
A new relay log is created when the I/O thread starts and when the
size of the current relay log exceeds max_binlog_size.
Replication also creates two small files in the data directory:
these files are the disk images of the output of SHOW SLAVE
STATUS (see section 4.10.7 SQL Commands Related to Replication for a description of this command);
but as disk images they survive a slave shutdown; this way, at restart
time, the slave
still knows its master and where it is in the master's binlogs,
and where it is in its own relay logs.
The format of `master.info' (line numbers correspond to the columns of SHOW SLAVE STATUS):
| Line# | Description |
| 1 | Master_Log_File |
| 2 | Read_Master_Log_Pos |
| 3 | Master_Host |
| 4 | Master_User |
| 5 | Password (not in SHOW SLAVE STATUS) |
| 6 | Master_Port |
| 7 | Connect_Retry |
The format of `relay-log.info' (line numbers correspond to the columns of SHOW SLAVE STATUS):
| Line# | Description |
| 1 | Relay_Log_File |
| 2 | Relay_Log_Pos |
| 3 | Relay_Master_Log_File |
| 4 | Exec_master_log_pos |
Here is a quick description of how to set up complete replication on your current MySQL server. It assumes you want to replicate all your databases and have not configured replication before. You will need to shut down your master server briefly to complete the steps outlined here.
While this method is the most straightforward way to set up a slave, it is not the only one. For example, if you already have a snapshot of the master, and the master already has server id set and binary logging enabled, you can set up a slave without shutting the master down or even blocking the updates. For more details, please see section 4.10.8 Replication FAQ.
If you want to be able to administrate a MySQL replication setup, we suggest that you read this entire chapter through and try all commands mentioned in section 4.10.7 SQL Commands Related to Replication. You should also familiarise yourself with replication startup options in `my.cnf' in section 4.10.6 Replication Options in `my.cnf'.
Set up an account on the master that the slave can use to connect; this account must have the FILE
privilege (in MySQL versions older than 4.0.2) or the REPLICATION SLAVE
privilege in newer MySQL versions. You must also have given this user
permission to connect from all the slaves. If the user is only doing replication
(which is recommended), you don't need to grant any additional privileges.
For example, to create a user named repl which can access your
master from any host, you might use this command:
mysql> GRANT FILE ON *.* TO repl@"%" IDENTIFIED BY '<password>'; # master < 4.0.2
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@"%" IDENTIFIED BY '<password>'; # master >= 4.0.2
If you plan to use the
LOAD TABLE FROM MASTER or
LOAD DATA FROM MASTER commands, you will also need to grant the
REPLICATION CLIENT (or SUPER if the
master is older than 4.0.13) and RELOAD privileges on the
master to the above user.
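For example, on a master running 4.0.13 or newer, this might look like the following (a sketch reusing the repl account created above):
mysql> GRANT RELOAD, REPLICATION CLIENT ON *.* TO repl@"%";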
Flush all the tables and block write queries with the FLUSH TABLES WITH READ LOCK command:
mysql> FLUSH TABLES WITH READ LOCK;
and then take a snapshot of the data on your master server. The easiest way to do this is to simply use an archiving program (
tar on Unix, PowerArchiver, WinRAR,
WinZIP or any similar software on Windows) to
produce an archive of the databases in your master's data directory.
Include all the databases you want to replicate.
tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir
If you want to replicate only a database called
this_db, you
can do just this:
tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir/this_db
If you do not want to replicate the
mysql database, you can
exclude it from the archive too. You also need not copy into the archive the
master's binary logs, error log, or
`master.info' / `relay-log.info' / relay logs
(the latter exist if the master is itself a slave of another machine); you can exclude
all of these from the archive.
After or during the process of taking a snapshot, read the value of the
current binary log name and the offset on the master:
mysql> SHOW MASTER STATUS;
+---------------+----------+--------------+-------------------------------+
| File          | Position | Binlog_do_db | Binlog_ignore_db              |
+---------------+----------+--------------+-------------------------------+
| mysql-bin.003 | 73       | test,bar     | foo,manual,sasha_likes_to_run |
+---------------+----------+--------------+-------------------------------+
1 row in set (0.06 sec)
The
File column shows the name of the log, while Position shows
the offset. In the above example, the binary log value is
mysql-bin.003 and the offset is 73. Record the values - you will need
to use them later when you are setting up the slave.
Once you have taken the snapshot and recorded the log name and offset, you can
re-enable write activity on the master:
mysql> UNLOCK TABLES;
If you are using InnoDB tables, ideally you should use the InnoDB Hot Backup tool that is available to those who purchase MySQL commercial licenses, support, or the backup tool itself. It will take a consistent snapshot without acquiring any locks on the master server, and record the log name and offset corresponding to the snapshot to be later used on the slave. More information about the tool is available at http://www.innodb.com/hotbackup.html. Without the hot backup tool, the quickest way to take a snapshot of InnoDB tables is to shut the master server down and copy the InnoDB data files and logs, and the table definition files (
.frm). To record the current log file
name and offset, you should do the following before you shut down the server:
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
And then record the log name and the offset from the output of
SHOW MASTER STATUS as was shown earlier. Once you have recorded the
log name and the offset, shut the server down without unlocking the tables to
make sure it goes down with the snapshot corresponding to the current log file
and offset:
shell> mysqladmin -uroot shutdown
An alternative for both MyISAM and InnoDB tables is taking a SQL dump of the master instead of a binary copy like above; for this you can use
mysqldump --master-data
on your master and later run this SQL dump into your slave. This is
however slower than doing a binary copy.
If the master has been previously running without log-bin enabled,
the values of log name and position displayed by SHOW MASTER
STATUS or mysqldump will be empty. In that case, record empty
string ('') for the log name, and 4 for the offset.
Make sure that the `my.cnf' file on the master has log-bin (if it is not there already)
and server-id=unique number in the [mysqld] section. If those
options are not present, add them and restart the server.
It is very important that the id of the slave is different from
the id of the master. Think of server-id as something similar
to the IP address - it uniquely identifies the server instance in the
community of replication partners.
[mysqld]
log-bin
server-id=1
On the slave, set in the [mysqld] section of `my.cnf':
server-id=<some unique number between 1 and 2^32-1 that is different from that of the master>
replacing the value in <> with what is relevant to your system.
server-id must be different for each server participating in
replication. If you don't specify a server-id, it will be set to 1 if
you have not defined master-host, else it will be set to 2. Note
that if server-id is omitted, the master will refuse
connections from all slaves, and the slave will refuse to connect to a
master. Thus, omitting server-id is only good for backup with a
binary log.
You may want to start the slave server with the option skip-slave-start
so that the slave threads do not start until you have finished setting up the slave.
You may also want to start the slave server with the option
log-warnings; this way you will get more messages about
network/connection problems for example.
Restore the data from the snapshot into your slave's data directory (or, if you took a SQL dump with
mysqldump, run the dump file through the mysql client on the slave). Make
sure that the privileges on the files and directories are correct. The
user which MySQL runs as needs to be able to read and write to
them, just as on the master.
mysql> CHANGE MASTER TO
         MASTER_HOST='<master host name>',
         MASTER_USER='<replication user name>',
         MASTER_PASSWORD='<replication password>',
         MASTER_LOG_FILE='<recorded log file name>',
         MASTER_LOG_POS=<recorded log offset>;
replacing the values in <> with the actual values relevant to your system.
mysql> START SLAVE;
After you have done the above, the slave(s) should connect to the master and catch up on any updates which happened since the snapshot was taken.
If you have forgotten to set server-id for the slave you will get
the following error in the error log file:
Warning: one should set server_id to a non-0 value if master_host is set. The server will not act as a slave.
If you have forgotten to do this for the master, the slaves will not be able to connect to the master.
If a slave is not able to replicate for any reason, you will find error messages in the error log on the slave.
Once a slave is replicating, you will find a file called
`master.info' and one called `relay-log.info'
in the data directory. These two files
are used by the slave to keep track of how much
of the master's binary log it has processed. Do not remove or
edit these files, unless you really know what you are doing. Even in that case,
it is preferred that you use the CHANGE MASTER TO command.
NOTE: the content of `master.info' overrides some options specified on
the command-line or in `my.cnf' (see section 4.10.6 Replication Options in `my.cnf' for more details).
Now that you have a snapshot, you can use it to set up other slaves. To do so, follow the slave portion of the procedure described above. You do not need to take another snapshot of the master.
Here is an explanation of what is supported and what is not:
Replication is done correctly with AUTO_INCREMENT,
LAST_INSERT_ID(), and TIMESTAMP values.
The USER() and LOAD_FILE() functions
are replicated without changes and will thus not work reliably on the
slave. This is also true for CONNECTION_ID() in slave versions
strictly older than 4.1.1.
The new PASSWORD() function in MySQL 4.1 is replicated correctly
from 4.1.1 masters; your slaves must be 4.1.0 or above
to replicate it. If you have older slaves and need to replicate
PASSWORD() from your 4.1.x master, you should start your master
with option --old-password.
The SQL_MODE, FOREIGN_KEY_CHECKS, and TABLE_TYPE
variables are not replicated.
You must use the same character set (--default-character-set)
on the master and the slave. If not, you may get duplicate key errors on
the slave, because a key that is regarded as unique in the master character
set may not be unique in the slave character set.
If the slave is stopped or crashes in the middle of applying a BEGIN/COMMIT block, the whole
transaction will be executed again when it restarts, as the slave will later start at the beginning of the BEGIN block.
This issue is on our TODO and will be fixed in the near future.
FLUSH, ANALYZE, OPTIMIZE, and REPAIR commands
are not stored in the binary log and are therefore
not replicated to the slaves. This is not normally a problem as
these commands don't change anything. This does however mean that if you
update the MySQL privilege tables directly without using the
GRANT statement and you replicate the mysql privilege
database, you must do a FLUSH PRIVILEGES on your slaves to put
the new privileges into effect. Also if you use
FLUSH TABLES when renaming a MyISAM table involved in a
MERGE table, you will have to issue FLUSH TABLES
manually on the slave.
Since MySQL 4.1.1, these commands are written to the binary log
(except FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE,
FLUSH TABLES WITH READ LOCK) unless you specify
NO_WRITE_TO_BINLOG (or its alias LOCAL)
(for a syntax example see section 4.5.3 FLUSH Syntax).
If you need to shut down a slave server that replicates temporary tables, first issue
STOP SLAVE, then check the Slave_open_temp_tables variable to see
if it is 0; if so, issue mysqladmin shutdown. If the number is
not 0, restart the slave threads with START SLAVE and see if
you have better luck next time. We have plans to fix this in the near future.
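A minimal sketch of that shutdown procedure:
mysql> STOP SLAVE;
mysql> SHOW STATUS LIKE 'Slave_open_temp_tables';
shell> mysqladmin shutdown    (only if the value shown above was 0)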
It is safe to connect servers in a circular master-slave relationship if the servers are started with
log-slave-updates enabled.
Note, however, that many queries will not work right in this kind of
setup unless your client code is written to take care of the potential
problems that can happen from updates that occur in different sequence
on different servers.
This means that you can do a setup like the following:
A -> B -> C -> A
Thanks to server ids, which are encoded in the binary log events, A will know when the event it reads was originally created by A, so A will not execute it and there will be no infinite loop. But this circular setup will only work if you only do non-conflicting updates between the tables. In other words, if you insert data in A and C, you should never insert a row in A that may have a conflicting key with a row inserted in C. You should also not update the same rows on two servers if the order in which the updates are applied matters.
If the slave threads are not running, you can start them with START SLAVE.
If the connection to the master is lost, the slave will retry the connection every master-connect-retry (default
60) seconds. Because of this, it is safe to shut down the master, and
then restart it after a while. The slave will also be able to deal with
network connectivity outages. However, the slave will notice the
network outage only after receiving no data from the master for
slave_net_timeout seconds. So if your outages are short, you may want
to decrease slave_net_timeout; see section 4.5.7.4 SHOW VARIABLES.
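For example, on servers that allow global variables to be set at runtime (MySQL 4.0.3 and up), you might lower the timeout like this (the value is only an illustration; otherwise set it in `my.cnf'):
mysql> SET GLOBAL slave_net_timeout=30;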
If a query produces an error on the slave, the slave SQL thread terminates and the slave writes a message to its error log. You can tell the slave to skip such errors with the
slave-skip-errors option starting in Version 3.23.47.
If you update transactional tables and non-transactional tables inside the same
BEGIN/COMMIT segment, updates to the binary log may be out of sync
if some thread changes the non-transactional table before the
transaction commits. This is because the transaction is written to the
binary log only when it is committed.
Updates to transactional tables are written to the binary log only at
COMMIT, or not written at all if you use ROLLBACK. You
have to take this into account when updating both transactional tables
and non-transactional tables in the same transaction and you are using
binary logging for backups or replication.
The following list describes problems in 3.23 that are fixed in 4.0:
LOAD DATA INFILE will be handled properly as long as the file
still resides on the master server at the time of update
propagation.
LOAD LOCAL DATA INFILE will be skipped.
RAND() in updates does not replicate properly.
Use RAND(some_non_rand_expr) if you are replicating updates with
RAND(). You can, for example, use UNIX_TIMESTAMP() for the
argument to RAND(). This is fixed in 4.0.
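For example (the table and column names are purely hypothetical):
mysql> UPDATE scores SET bonus=RAND(UNIX_TIMESTAMP()) WHERE bonus IS NULL;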
On both master and slave you need to use the server-id option.
This sets a unique replication id. You should pick a unique value in the
range between 1 and 2^32-1 for each master and slave.
Example: server-id=3
The options you can use on the MASTER are all described there: see section 4.9.4 The Binary Log.
The following table describes the options you can use on the SLAVE. It is recommended to read the following paragraph; these options can help you customize replication to suit your needs.
NOTE: replication handles the options master-host, master-user,
master-password, master-port, and master-connect-retry
in a special way. If no `master.info' file exists (replication
is used for the very first time or you have run RESET SLAVE
and shutdown/restarted the slave server), the slave uses values
specified on the command-line or in `my.cnf'.
But if `master.info' exists, the slave IGNORES
any values specified on the command-line or in `my.cnf',
and uses instead the values it reads from `master.info'.
For example, if you have
master-host=this_host
in your `my.cnf' and are using replication, and then want to replicate
from another host, modifying the above line in `my.cnf' will have
no effect. You must use CHANGE MASTER TO instead. This holds
true for master-host, master-user, master-password,
master-port, master-connect-retry.
Therefore, you may decide to put no master-* options in
`my.cnf' and instead use only CHANGE MASTER TO
(see section 4.10.7 SQL Commands Related to Replication).
| Option | Description |
log-slave-updates |
Tells the slave to log the updates done by the slave SQL thread to the
slave's binary log. Off by default.
Of course, it requires that the slave be started with binary
logging enabled (log-bin option).
You have to use log-slave-updates to
chain several slaves; for example, for the following setup to work:
A -> B -> C
(C is a slave of B, which is a slave of A) you need to start B with the log-slave-updates option.
|
log-warnings | Makes the slave print more messages about what it is doing. For example, it will warn you that it succeeded in reconnecting after a network/connection failure, and warn you about how each slave thread started. |
master-host=host |
Master hostname or IP address for replication. If not set, the slave
thread will not be started. Note that the setting of master-host
will be ignored if there exists a valid `master.info' file. Probably a
better name for this option would have been something like
bootstrap-master-host, but it is too late to change now.
Example: master-host=db-master.mycompany.com
|
master-user=username |
The username the slave thread will use for authentication when connecting to
the master. The user must have the FILE privilege. If the master user
is not set, user test is assumed. The value in `master.info' will
take precedence if it can be read.
Example: master-user=scott
|
master-password=password |
The password the slave thread will authenticate with when connecting to
the master. If not set, an empty password is assumed. The value in
`master.info' will take precedence if it can be read.
Example: master-password=tiger
|
master-port=portnumber |
The port the master is listening on. If not set, the compiled setting of
MYSQL_PORT is assumed. If you have not tinkered with
configure options, this should be 3306. The value in
`master.info' will take precedence if it can be read.
Example: master-port=3306
|
master-connect-retry=seconds |
The number of seconds the slave thread will sleep before retrying to
connect to the master in case the master goes down or the connection is
lost. Default is 60. The value in `master.info' will
take precedence if it can be read.
Example: master-connect-retry=60
|
master-ssl |
Planned to enable the slave to connect to the master using SSL.
Does nothing yet!
Example: master-ssl
|
master-ssl-key=filename |
Master SSL keyfile name. Only applies if you have
enabled master-ssl. Does nothing yet.
Example: master-ssl-key=SSL/master-key.pem
|
master-ssl-cert=filename |
Master SSL certificate file name. Only applies if
you have enabled master-ssl. Does nothing yet.
Example: master-ssl-cert=SSL/master-cert.pem
|
master-ssl-capath |
Master SSL CA path. Only applies if
you have enabled master-ssl. Does nothing yet.
|
master-ssl-cipher |
Master SSL cipher. Only applies if
you have enabled master-ssl. Does nothing yet.
|
master-info-file=filename | To give `master.info' another name and/or to put it in another directory than the data directory. |
relay-log=filename |
To specify the location and name that should be used for relay logs.
You can use this to have hostname-independent relay log names, or if
your relay logs tend to be big (and you don't want to decrease
max_binlog_size) and you need to put them on some area
different from the data directory, or if you want to increase speed by
balancing load between disks.
|
relay-log-index=filename | To specify the location and name that should be used for the relay logs index file. |
relay-log-info-file=filename | To give `relay-log.info' another name and/or to put it in another directory than the data directory. |
relay-log-purge=0|1 |
Available since MySQL 4.1.1.
Disables/enables automatic purging of relay logs as soon as they are
not needed anymore. This is a global variable which can be dynamically
changed with SET GLOBAL RELAY_LOG_PURGE=0|1. The default value
is 1.
|
relay-log-space-limit=# | To put an upper limit on the total size of all relay logs on the slave. This is useful if you have a small hard disk on your slave machine. When the limit is reached, the I/O thread pauses (does not read the master's binlog) until the SQL thread has caught up and deleted some now-unused relay logs. Note that this limit is not absolute: there are cases where the SQL thread needs more events to be able to delete; in that case the I/O thread will exceed the limit until deletion becomes possible. Not doing so would cause a deadlock (which could happen before MySQL 4.0.13). |
replicate-do-table=db_name.table_name |
Tells the slave thread to restrict replication to the specified table.
To specify more than one table, use the directive multiple times, once
for each table. This will work for cross-database updates, in
contrast to replicate-do-db.
Example: replicate-do-table=some_db.some_table
|
replicate-ignore-table=db_name.table_name |
Tells the slave thread to not replicate any command that updates the
specified table (even if any other tables may be updated by the same
command). To specify more than one table to ignore, use the directive
multiple times, once for each table. This will work for cross-database
updates, in contrast to replicate-ignore-db.
Example: replicate-ignore-table=db_name.some_table
|
replicate-wild-do-table=db_name.table_name |
Tells the slave thread to restrict replication to queries where any of
the updated tables match the specified wildcard pattern. To specify
more than one table, use the directive multiple times, once for each
table. This will work for cross-database updates.
Example: replicate-wild-do-table=foo%.bar% will replicate only
updates that use a table in any database that starts with foo
and whose table names start with bar.
Note that if you do replicate-wild-do-table=foo%.% then the rule
will be propagated to CREATE DATABASE and DROP DATABASE,
i.e. these two statements will be replicated if the database name
matches the database pattern ('foo%' here) (this magic is triggered by
'%' being the table pattern).
|
replicate-wild-ignore-table=db_name.table_name |
Tells the slave thread to not replicate a query where any table matches the
given wildcard pattern. To specify more than one table to ignore, use
the directive multiple times, once for each table. This will work for
cross-database updates.
Example: replicate-wild-ignore-table=foo%.bar% will not do updates
to tables in databases that start with foo and whose table names start
with bar.
Note that if you do replicate-wild-ignore-table=foo%.% then the
rule will be propagated to CREATE DATABASE and DROP
DATABASE, i.e. these two statements will not be replicated if the
database name matches the database pattern ('foo%' here) (this magic is
triggered by '%' being the table pattern).
|
replicate-do-db=database_name |
Tells the slave to restrict replication to commands where
the current database (i.e. the one selected by USE)
is database_name.
To specify more than one database, use the directive multiple
times, once for each database. Note that this will not replicate
cross-database queries such as UPDATE some_db.some_table
SET foo='bar' while having selected a different or no database. If you
need cross database updates to work, make sure you have 3.23.28 or
later, and use replicate-wild-do-table=db_name.%.
Example: replicate-do-db=some_db
Example of what does not work as you could expect it: if the slave is
started with replicate-do-db=sales, and you do
USE prices; UPDATE sales.january SET amount=amount+1000;,
this query will not be replicated.
If you need cross database updates to work,
use replicate-wild-do-table=db_name.% instead.
The main reason for this ``just-check-the-current-database''
behaviour is that it's hard from the command
alone to know if a query should be replicated or not; for example, if you
are using multi-table-delete or multi-table-update commands
that go across multiple databases. It's also very fast to just check
the current database.
|
replicate-ignore-db=database_name |
Tells the slave to not replicate any command where the current
database (i.e. the one selected by USE)
is database_name. To specify more than one database to
ignore, use the directive multiple times, once for each database.
You should not use this directive if you are using cross-database updates
and you don't want these updates to be replicated.
Example: replicate-ignore-db=some_db
Example of what does not work as you could expect it: if the slave is
started with replicate-ignore-db=sales, and you do
USE prices; UPDATE sales.january SET amount=amount+1000;,
this query will be replicated.
If you need cross database updates to work,
use replicate-wild-ignore-table=db_name.% instead.
|
replicate-rewrite-db=from_name->to_name |
Tells the slave to translate the current database
(i.e. the one selected by USE)
to to_name if it was from_name on the master.
Only statements involving tables may be affected
(CREATE DATABASE, DROP DATABASE won't),
and only if from_name was the current database on the master.
This will not work for cross-database updates.
Example: replicate-rewrite-db=master_db_name->slave_db_name
|
report-host=host |
Available after 4.0.0. Hostname or IP of the slave to be reported to
the master during slave registration. Will appear in the output of
SHOW SLAVE HOSTS. Leave unset if you do not want the slave to
register itself with the master. Note that it is not sufficient for the
master to simply read the IP of the slave off the socket once the slave
connects. Due to NAT and other routing issues, that IP may not be
valid for connecting to the slave from the master or other hosts. For
the moment this option has no real interest; it is meant for failover
replication which is not implemented yet.
Example: report-host=slave1.mycompany.com
|
report-port=portnumber | Available after 4.0.0. Port for connecting to slave reported to the master during slave registration. Set it only if the slave is listening on a non-default port or if you have a special tunnel from the master or other clients to the slave. If not sure, leave this option unset. For the moment this option has no real interest; it is meant for failover replication which is not implemented yet. |
skip-slave-start |
Tells the slave server not to start the slave threads on server startup. The user
can start them later with START SLAVE.
|
slave_compressed_protocol=# | If 1, then use compression on the slave/client protocol if both slave and master support this. |
slave-load-tmpdir=filename |
This option is by default equal to tmpdir.
When the SQL slave replicates a LOAD DATA INFILE command, it
extracts the to-be-loaded file from the relay log into temporary files,
then loads these into the table. If the file loaded on the master was
huge, the temporary files on the slave will be huge too; therefore you
may wish/have to tell the slave to put the temporary files on some
large disk different from tmpdir, using this option. In
that case, you may also use the relay-log option,
as relay logs will be huge too.
|
slave-net-timeout=# |
Number of seconds to wait for more data from the master before aborting
the read, considering the connection broken and retrying to connect,
first time immediately, then every master-connect-retry seconds.
|
slave-skip-errors= [err_code1,err_code2,... | all] |
Tells the slave SQL thread to continue
replication when a query returns an error from the provided
list. Normally, replication will discontinue when an error is
encountered, giving the user a chance to resolve the inconsistency in the
data manually. Do not use this option unless you fully understand why
you are getting the errors. If there are no bugs in your
replication setup and client programs, and no bugs in MySQL itself, you
should never get an abort with error. Indiscriminate use of this option
will result in slaves being hopelessly out of sync with the master and
you having no idea how the problem happened.
For error codes, you should use the numbers provided by the error message in
your slave error log and in the output of SHOW SLAVE STATUS. Full list
of error messages can be found in the source distribution in
`Docs/mysqld_error.txt'.
You can (but should not) also use a very non-recommended value of all
which will ignore all error messages and keep barging along regardless.
Needless to say, if you use it, we make no promises regarding your data
integrity. Please do not complain if your data on the slave is not anywhere
close to what it is on the master in this case -- you have been warned.
Example:
slave-skip-errors=1062,1053 or slave-skip-errors=all
|
Some of these options, like all replicate-* options, can only
be set at the slave server's startup, not on-the-fly. We plan to fix this.
Replication can be controlled through the SQL interface. Here is the summary of commands. Near each command you will find ``(Slave)'', meaning this command is issued on the slave, or ``Master'', meaning it is issued on the master.
START SLAVE (slave)
Starts the slave threads. Was called SLAVE START in MySQL 3.23.
As of MySQL 4.0.2, you can add IO_THREAD or SQL_THREAD
options to the statement to start only the I/O thread or the SQL thread.
The I/O thread reads queries from the master server and stores them in the
relay log. The SQL thread reads the relay log and executes the
queries.
Note that if START SLAVE succeeds in starting the slave threads it
will return without any error. But even in that case it might be that slave
threads start and then later stop (because they don't manage to
connect to the master or read its binlogs or any other
problem). START SLAVE will not warn you about this; you have to
check your slave's `.err' file for error messages generated by
the slave threads, or check that these are running fine with SHOW
SLAVE STATUS.
STOP SLAVE (slave)
Stops the slave threads. Was called SLAVE STOP in MySQL 3.23.
Like START SLAVE, this statement
may be used with IO_THREAD and SQL_THREAD options.
SET SQL_LOG_BIN=0|1 (master)
Disables/enables binary logging for the user's connection
(SQL_LOG_BIN is a session variable)
if the user has the SUPER privilege.
Ignored otherwise.
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=n (slave)
Skip the next n events from the master. Only valid when
the slave thread is not running; otherwise, it gives an error. Useful for
recovering from replication stops caused by a statement.
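For example, a typical sequence to skip the single statement that stopped the slave SQL thread would be (whether skipping is appropriate depends on the error):
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
mysql> START SLAVE;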
RESET MASTER (master)
Deletes all binary logs listed in the index file, resetting the binlog
index file to be empty. Previously named FLUSH MASTER.
RESET SLAVE (slave)
Makes the slave forget its replication position in the master's binlogs,
deletes the `master.info' and
`relay-log.info' files and all relay logs, and starts a new relay log.
Connection information (master-host et al.) is not cleared in
memory and
will be reused if you issue START SLAVE later. But if you shut down
the slave server between RESET SLAVE and START SLAVE,
then the connection information in memory will be lost and reread from
the command-line or `my.cnf'. Previously named FLUSH SLAVE.
LOAD TABLE tblname FROM MASTER (slave)
Downloads a copy of the table from master to the slave. Implemented
mainly for debugging of LOAD DATA FROM MASTER.
Requires that the replication user which is used to connect to the master has
RELOAD and SUPER privileges on the master.
Please read the timeout notes in the description of LOAD DATA
FROM MASTER below, they apply here too.
LOAD DATA FROM MASTER (slave)
Takes a snapshot of the master and copies
it to the slave.
Requires that the replication user which is used to connect to the master has
RELOAD and SUPER privileges on the master.
Updates the values of MASTER_LOG_FILE and
MASTER_LOG_POS so that the slave will start replicating from the
correct position. Will honor table and database exclusion rules
specified with replicate-* options. So far works only with
MyISAM tables and acquires a global read lock on the master while
taking the snapshot. In the future it is planned to make it work with
InnoDB tables and to remove the need for global read lock using
the non-blocking online backup feature.
If you are loading big tables, you may have to increase the values
of net_read_timeout and net_write_timeout
on both your master and slave; see section 4.5.7.4 SHOW VARIABLES.
Note that LOAD DATA FROM MASTER does NOT copy any
tables from the mysql database. This is to make it easy to have
different users and privileges on the master and the slave.
CHANGE MASTER TO master_def_list (slave)
Changes the master parameters (connection information)
to the values specified in master_def_list. master_def_list
is a comma-separated list of master_def where master_def is
one of the following: MASTER_HOST, MASTER_USER,
MASTER_PASSWORD, MASTER_PORT, MASTER_CONNECT_RETRY,
MASTER_LOG_FILE, MASTER_LOG_POS,
RELAY_LOG_FILE, RELAY_LOG_POS (these last two only
starting from MySQL 4.0).
For example:
CHANGE MASTER TO
    MASTER_HOST='master2.mycompany.com',
    MASTER_USER='replication',
    MASTER_PASSWORD='bigs3cret',
    MASTER_PORT=3306,
    MASTER_LOG_FILE='master2-bin.001',
    MASTER_LOG_POS=4,
    MASTER_CONNECT_RETRY=10;
CHANGE MASTER TO
    RELAY_LOG_FILE='slave-relay-bin.006',
    RELAY_LOG_POS=4025;
You only need to specify the values that need to be changed. The values that you omit will stay the same with the exception of when you specify (not necessarily change) the host or port. In that case, the slave will assume that the master is different. Therefore, the old values of log and position are no longer applicable and will automatically be reset to an empty string and 0, respectively (the start values).
This command is useful for setting up a slave when you have the snapshot of
the master and have recorded the log and the offset on the master that the
snapshot corresponds to. You can run
CHANGE MASTER TO MASTER_LOG_FILE='log_name_on_master',
MASTER_LOG_POS=log_offset_on_master on the slave after restoring the
snapshot.
CHANGE MASTER TO deletes all relay logs and starts
a new one, unless you specified RELAY_LOG_FILE or
RELAY_LOG_POS (in that case relay logs will be kept;
since MySQL 4.1.1 the RELAY_LOG_PURGE global variable
will silently be set to 0).
CHANGE MASTER TO updates `master.info' and `relay-log.info'.
The first example above changes the master and master's binlog
coordinates. This is when you want the slave to replicate the master.
The second example, less frequently used, is when the slave has relay logs which, for some
reason, you want the slave to execute again; to do this the master
needn't be reachable, you just have to do CHANGE MASTER TO
and start the SQL thread (START SLAVE SQL_THREAD).
You can even use this out of a replication setup, on a standalone,
slave-of-nobody server, to recover after a crash.
Suppose your server has crashed and you have restored a backup.
You want to replay the server's own binlogs (not relay logs, but regular binary
logs), supposedly named `myhost-bin.*'. First make a backup copy of
these binlogs in some safe place, in case you don't exactly follow the
procedure below and accidentally have the server purge the binlogs.
If using MySQL 4.1.1 or newer, do SET GLOBAL RELAY_LOG_PURGE=0 for additional safety.
Then start the server without log-bin, with a new
(different from before) server id, with relay-log=myhost-bin
(to make the server believe that these regular binlogs are relay
logs) and skip-slave-start,
then issue
CHANGE MASTER TO RELAY_LOG_FILE='myhost-bin.153', RELAY_LOG_POS=410, MASTER_HOST='some_dummy_string';
START SLAVE SQL_THREAD;
Then the server will read and execute its own binlogs, thus achieving
crash recovery.
Once the recovery is finished, run STOP SLAVE, shutdown the
server, delete `master.info' and `relay-log.info',
and restart the server with its original options.
For the moment, specifying MASTER_HOST (even with a dummy value) is compulsory
to make the server think it is a slave, and giving the server a new
server id (different from before) is also compulsory; otherwise the
server will see events with its own id and think it is in a circular
replication setup and skip the events, which is unwanted. In the
future we plan to add options to get rid of these small constraints.
MASTER_POS_WAIT() (slave)This is not a command but a function, used to ensure that the slave has reached (read and executed up to) a given position in the master's binlog; see section 6.3.6.2 Miscellaneous Functions for a full description.
SHOW MASTER STATUS (master)Provides status information on the binlog of the master.
SHOW SLAVE HOSTS (master)Gives a listing of slaves currently registered with the master.
SHOW SLAVE STATUS (slave)
Provides status information on
essential parameters of the slave threads (Slave). If you type it in the
mysql client, you can put a \G instead of a semi-colon
at the end, to get a vertical, more readable layout:
SLAVE> show slave status\G
*************************** 1. row ***************************
Master_Host: localhost
Master_User: root
Master_Port: 3306
Connect_retry: 3
Master_Log_File: gbichot-bin.005
Read_Master_Log_Pos: 79
Relay_Log_File: gbichot-relay-bin.005
Relay_Log_Pos: 548
Relay_Master_Log_File: gbichot-bin.005
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_do_db:
Replicate_ignore_db:
Last_errno: 0
Last_error:
Skip_counter: 0
Exec_master_log_pos: 79
Relay_log_space: 552
1 row in set (0.00 sec)
Master_Host
the current master host.
Master_User
the current user used to connect to the master.
Master_Port
the current master port.
Connect_Retry
the current value of master-connect-retry.
Master_Log_File
the master's binlog in which the I/O thread is currently reading.
Read_Master_Log_Pos
the position which the I/O thread has read up to in this master's binlog.
Relay_Log_File
the relay log which the SQL thread is currently reading and executing.
Relay_Log_Pos
the position which the SQL thread has read and executed up to in this relay log.
Relay_Master_Log_File
the master's binlog which contains the
last event executed by the SQL thread.
Slave_IO_Running
tells whether the I/O thread is started or not.
Slave_SQL_Running
tells whether the SQL thread is started or not.
Replicate_do_db / Replicate_ignore_db
the lists of the databases which have been specified with option
replicate-do-db / replicate-ignore-db;
starting from version 4.1, options replicate_*_table are also
displayed in four more columns.
Last_errno
the error number returned by the most recently executed query (should be 0).
Last_error
the error message returned by the most recently executed query (should be
empty); if not empty, you will find this message in the slave's error
log too. For example:
Last_errno: 1051
Last_error: error 'Unknown table 'z'' on query 'drop table z'
Here the table 'z' existed on the master and was dropped there, but it did not exist on the slave (the user had forgotten to copy it to the slave when setting the slave up), so
DROP TABLE failed on the slave.
Skip_counter
the last used value for SQL_SLAVE_SKIP_COUNTER.
Exec_master_log_pos
the position in the master's binlog (Relay_Master_Log_File)
of the last event executed by the SQL thread.
((Relay_Master_Log_File,Exec_master_log_pos) in the
master's binlog corresponds to
(Relay_Log_File,Relay_Log_Pos)
in the relay log).
Relay_log_space
the total size of all existing relay logs.
SHOW MASTER LOGS (master)
Lists the binary logs on the master. You should use this
command prior to PURGE [MASTER] LOGS to find out how far you
should go.
SHOW BINLOG EVENTS (master)
SHOW BINLOG EVENTS [ IN 'logname' ] [ FROM pos ] [ LIMIT [offset,] rows ]
Shows the events in the binary log.
If you do not specify 'logname', the first binary log will be displayed.
SHOW NEW MASTER FOR SLAVE (slave)
SHOW NEW MASTER FOR SLAVE
WITH MASTER_LOG_FILE='logfile' AND MASTER_LOG_POS=pos AND
MASTER_LOG_SEQ=log_seq AND MASTER_SERVER_ID=server_id
This command is used when a slave of a possibly dead/unavailable master
needs to be switched to replicate off another slave that has been
replicating the same master. The command will return recalculated
replication coordinates (the slave's current binary log file
name and position within that file). The output can be used in a subsequent
CHANGE MASTER TO command. Normal users should never need to run
this command. It is primarily reserved for internal use by the fail-safe
replication code. We may later change the syntax if we find a more
intuitive way to describe this operation.
PURGE [MASTER] LOGS (master)
PURGE [MASTER] LOGS TO 'logname'
PURGE [MASTER] LOGS BEFORE 'date'
The BEFORE variant is available in MySQL 4.1; its date argument
can be in the format 'YYYY-MM-DD HH:MM:SS'.
The MASTER keyword is optional and has no effect on the statement.
Deletes all the
binary logs that are listed in the log
index as being strictly prior to the specified log or date, and
removes them from the
log index, so that the given log now becomes the first.
Example:
PURGE LOGS TO 'mysql-bin.010';
PURGE LOGS BEFORE '2003-04-02 22:46:26';
This command will do nothing and fail with an error if you have an active slave that is currently reading one of the logs you are trying to delete. However, if you have a dormant slave, and happen to purge one of the logs it wants to read, the slave will be unable to replicate once it comes up. The command is safe to run while slaves are replicating -- you do not need to stop them.
You must first check all the slaves with SHOW SLAVE STATUS to
see which log they are on, then do a listing of the logs on the
master with SHOW MASTER LOGS, find the earliest log among all
the slaves (if all the slaves are up to date, this will be the
last log on the list), back up all the logs you are about to delete
(optional) and purge up to the target log.
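A sketch of the whole procedure, with purely illustrative log names and paths:
mysql> SHOW SLAVE STATUS;                           (on each slave; note which log it is reading)
mysql> SHOW MASTER LOGS;                            (on the master)
shell> cp /var/lib/mysql/mysql-bin.00* /backups/    (optional backup)
mysql> PURGE MASTER LOGS TO 'mysql-bin.010';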
Q: How do I configure a slave if the master is already running and I do not want to stop it?
A: There are several options. If you have taken a backup of the
master at some point and recorded the binlog name and offset (from the
output of SHOW MASTER STATUS ) corresponding to the snapshot, do
the following:
CHANGE MASTER TO MASTER_HOST='master-host-name',
MASTER_USER='master-user-name', MASTER_PASSWORD='master-pass',
MASTER_LOG_FILE='recorded-log-name', MASTER_LOG_POS=recorded_log_pos
SLAVE START
If you do not have a backup of the master already, here is a quick way to do it consistently:
FLUSH TABLES WITH READ LOCK
gtar zcf /tmp/backup.tar.gz /var/lib/mysql (or a variation of this)
SHOW MASTER STATUS - make sure to record the output - you will need it
later
UNLOCK TABLES
An alternative is taking a SQL dump of the master instead of a binary
copy like above; for this you can use mysqldump --master-data
on your master and later run this SQL dump into your slave. This is
however slower than doing a binary copy.
No matter which of the two ways you used, afterwards follow the instructions for the case when you have a snapshot and have recorded the log name and offset. You can use the same snapshot to set up several slaves. As long as the binary logs of the master are left intact, you can wait as long as several days or in some cases maybe a month to set up a slave once you have the snapshot of the master. In theory the waiting gap can be infinite. The two practical limitations are the diskspace of the master getting filled with old logs, and the amount of time it will take the slave to catch up.
You can also use LOAD DATA FROM
MASTER. This is a convenient command that will take a snapshot,
restore it to the slave, and adjust the log name and offset on the slave
all at once. In the future, LOAD DATA FROM MASTER will be the
recommended way to set up a slave. Be warned, however, that the read
lock may be held for a long time if you use this command. It is not yet
implemented as efficiently as we would like to have it. If you have
large tables, the preferred method at this time is still with a local
tar snapshot after executing FLUSH TABLES WITH READ LOCK.
Q: Does the slave need to be connected to the master all the time?
A: No, it does not. You can have the slave go down or stay disconnected for hours or even days, then reconnect, catch up on the updates, and then disconnect or go down for a while again. So you can, for example, use master-slave setup over a dial-up link that is up only for short periods of time. The implications of that are that at any given time the slave is not guaranteed to be in sync with the master unless you take some special measures. In the future, we will have the option to block the master until at least one slave is in sync.
Q: How do I know how late the slave is compared to the master? In other words, how do I know the date of the last query replicated by the slave?
A: This is possible only if the SQL slave thread exists
(i.e. if it shows up in SHOW PROCESSLIST, see section 4.10.3 Replication Implementation Details)
(in MySQL 3.23: if the slave thread exists, i.e. shows up in
SHOW PROCESSLIST),
and if it has executed at least one event
from the master. Indeed, when the SQL slave thread executes an event
read from the master, this thread modifies its own time to the event's
timestamp (this is why TIMESTAMP is well replicated). So in the
Time column in the output of SHOW PROCESSLIST, the
number of seconds displayed for the SQL slave thread is the number of
seconds between the timestamp of the last replicated event and the
real time of the slave machine. You can use this to determine the date
of the last replicated event. Note that if your slave has been
disconnected from the master for one hour, then reconnects,
you may immediately see Time values like 3600 for the SQL slave
thread in SHOW PROCESSLIST... This would be because the slave
is executing queries that are one hour old.
Q: How do I force the master to block updates until the slave catches up?
A: Execute the following commands:
FLUSH TABLES WITH READ LOCK
SHOW MASTER STATUS - record the log name and the offset
SELECT MASTER_POS_WAIT('recorded_log_name', recorded_log_offset)
When the select returns, the slave is currently in sync with the master
UNLOCK TABLES - now the master will continue updates.
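As a concrete sketch (the log name and offset are placeholders; note which server each statement runs on):
mysql> FLUSH TABLES WITH READ LOCK;                  (on the master)
mysql> SHOW MASTER STATUS;                           (on the master)
mysql> SELECT MASTER_POS_WAIT('mysql-bin.003', 73);  (on the slave)
mysql> UNLOCK TABLES;                                (on the master)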
Q: What issues should I be aware of when setting up two-way replication?
A: MySQL replication currently does not support any locking protocol between master and slave to guarantee the atomicity of a distributed (cross-server) update. In other words, it is possible for client A to make an update to co-master 1, and in the meantime, before it propagates to co-master 2, client B could make an update to co-master 2 that will make the update of client A work differently than it did on co-master 1. Thus when the update of client A makes it to co-master 2, it will produce tables that are different from what you have on co-master 1, even after all the updates from co-master 2 have also propagated. So you should not co-chain two servers in a two-way replication relationship, unless you are sure that your updates can safely happen in any order, or unless you take care of mis-ordered updates somehow in the client code.
You must also realise that two-way replication actually does not improve performance very much, if at all, as far as updates are concerned. Both servers need to do the same amount of updates each, as you would have one server do. The only difference is that there will be a little less lock contention, because the updates originating on another server will be serialised in one slave thread. This benefit, though, might be offset by network delays.
Q: How can I use replication to improve performance of my system?
A: You should set up one server as the master, and direct all
writes to it, and configure as many slaves as you have the money and
rackspace for, distributing the reads among the master and the slaves.
You can also start the slaves with --skip-bdb,
--low-priority-updates and --delay-key-write=ALL
to get speed improvements for the slave. In this case the slave will
use non-transactional MyISAM tables instead of BDB tables
to get more speed.
Q: What should I do to prepare my client code to use performance-enhancing replication?
A: If the part of your code that is responsible for database access has been properly abstracted/modularised, converting it to run with the replicated setup should be very smooth and easy -- just change the implementation of your database access to read from some slave or the master, and to always write to the master. If your code does not have this level of abstraction, setting up a replicated system will give you an opportunity/motivation to clean it up. You should start by creating a wrapper library/module with the following functions:
safe_writer_connect()
safe_reader_connect()
safe_reader_query()
safe_writer_query()
safe_ means that the function will take care of handling all
the error conditions.
You should then convert your client code to use the wrapper library.
It may be a painful and scary process at first, but it will pay off in
the long run. All applications that follow the above pattern will be
able to take advantage of one-master/many slaves solution. The
code will be a lot easier to maintain, and adding troubleshooting
options will be trivial. You will just need to modify one or two
functions, for example, to log how long each query took, or which
query, among your many thousands, gave you an error. If you have
written a lot of code already, you may want to automate the conversion
task by using Monty's replace utility, which comes with the
standard distribution of MySQL, or just write your own Perl script.
Hopefully, your code follows some recognisable pattern. If not, then
you are probably better off rewriting it anyway, or at least going
through and manually beating it into a pattern.
Note that, of course, you can use different names for the functions. What is important is having unified interface for connecting for reads, connecting for writes, doing a read, and doing a write.
Q: When and how much can MySQL replication improve the performance of my system?
A: MySQL replication is most beneficial for a system with frequent reads and not so frequent writes. In theory, by using a one master/many slaves setup you can scale by adding more slaves until you either run out of network bandwidth, or your update load grows to the point that the master cannot handle it.
In order to determine how many slaves you can get before the added
benefits begin to level out, and how much you can improve performance
of your site, you need to know your query patterns, and empirically
(by benchmarking) determine the relationship between the throughput
on reads (reads per second, or max_reads) and on writes
(writes per second, or max_writes) on a typical master and a typical slave. The
example here will show you a rather simplified calculation of what you
can get with replication for our imagined system.
Let's say our system load consists of 10% writes and 90% reads, and we
have determined that max_reads = 1200 - 2 * max_writes,
or in other words, our system can do 1200 reads per second with no
writes, our average write is twice as slow as average read,
and the relationship is
linear. Let us suppose that our master and slave are of the same
capacity, and we have N slaves and 1 master. Then we have for each
server (master or slave):
reads = 1200 - 2 * writes          (from benchmarks)
reads = 9 * writes / (N + 1)       (reads are split, but writes go to all servers)
9 * writes / (N + 1) + 2 * writes = 1200
writes = 1200 / (2 + 9/(N+1))
So if N = 0, which means we have no replication, our system can handle 1200/11, about 109 writes per second (which means we will have 9 times as many reads due to the nature of our application).
If N = 1, we can get up to 184 writes per second.
If N = 8, we get up to 400.
If N = 17, 480 writes.
Eventually as N approaches infinity (and our budget negative infinity), we can get very close to 600 writes per second, increasing system throughput about 5.5 times. However, with only 8 servers, we increased it almost 4 times already.
Note that our computations assumed infinite network bandwidth and neglected several other factors that could turn out to be significant on your system. In many cases, you may not be able to make a computation similar to the one above that accurately predicts what will happen on your system if you add N replication slaves. However, doing a rough calculation like this for your own query patterns should help you decide whether, and by how much, replication will improve the performance of your system.
Q: How can I use replication to provide redundancy/high availability?
A: With the currently available features, you would have to set up a master and a slave (or several slaves), and write a script that will monitor the master to see if it is up, and that will instruct your applications and the slaves to change master in case of failure. Some suggestions:
To tell a slave to change its master, use the CHANGE MASTER TO command.
A good way to keep your applications informed of where the master is, is to have a dynamic DNS entry for the master. With bind you can use `nsupdate' to dynamically update your DNS.
Run your slaves with the log-bin option and without
log-slave-updates. This way the slave will be ready to become a
master as soon as you issue STOP SLAVE; RESET MASTER, and
CHANGE MASTER TO on the other slaves, as sketched below.
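For example, a promotion could look roughly like this (the host name and replication account are hypothetical):
mysql> STOP SLAVE; RESET MASTER;    # on the slave being promoted to master
mysql> CHANGE MASTER TO             # on each of the remaining slaves
    ->        MASTER_HOST='new-master.example.com',
    ->        MASTER_USER='repl',
    ->        MASTER_PASSWORD='secret';
mysql> SLAVE START;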
We are currently working on integrating an automatic master election system into MySQL, but until it is ready, you will have to create your own monitoring tools.
If you have followed the instructions, and your replication setup is not working, first check the following:
Check that the master has binary logging enabled with SHOW MASTER STATUS.
If it does, Position will be non-zero. If not, verify that you have
given the master the log-bin option and have set server-id.
Check that the slave is running with SHOW SLAVE STATUS and that the
Slave_IO_Running and Slave_SQL_Running values are both ``Yes''.
If not, verify the slave options.
If the slave is running, use SHOW PROCESSLIST, find the I/O and SQL threads
(see section 4.10.3 Replication Implementation Details to see how they display),
and check their
State column. If it says connecting to master, verify the
privileges for the replication user on the master, the master host name, your
DNS setup, whether the master is actually running, and whether it is reachable
from the slave.
If the slave was running but has stopped, look at the output of
SHOW SLAVE STATUS and in the slave's error log to find the reason,
fix the problem, and then run SLAVE START.
If you have determined that the slave can safely skip the offending query, issue
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; SLAVE START; to skip a query that
does not use AUTO_INCREMENT or LAST_INSERT_ID(), or
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=2; SLAVE START; otherwise. The reason
queries that use AUTO_INCREMENT or LAST_INSERT_ID()
are different is that they take two events in the binary log of the master.
When you have determined that there is no user error involved, and replication still either does not work at all or is unstable, it is time to send us a bug report. We need to get as much info as possible from you to be able to track down the bug. Please do spend some time and effort preparing a good bug report.
If you have a repeatable way to demonstrate the bug, use
mysqlbug to prepare a bug report and enter it into our bugs database
at http://bugs.mysql.com/. If you have a phantom problem -- one that
does occur but that you cannot duplicate ``at will'' (fortunately, this
rarely happens) -- use the following procedure:
Run the slave with the log-slave-updates and log-bin options -- this will keep
a log of all updates on the slave.
Save the output of SHOW MASTER STATUS on the master at the time
you discover the problem.
Save the output of SHOW SLAVE STATUS on the slave at the time
you discover the problem.
Use mysqlbinlog to examine the binary logs. For example, the following should
be helpful in finding the troublesome query:
mysqlbinlog -j pos_from_slave_status /path/to/log_from_slave_status | head
Once you have collected the evidence on the phantom problem, try hard to isolate it into a separate test case first. Then enter the problem into our bugs database at http://bugs.mysql.com/ with as much information as possible.
Optimisation is a complicated task because it ultimately requires understanding of the whole system. While it may be possible to do some local optimisations with little knowledge of your system or application, the more optimal you want your system to become, the more you will have to know about it.
This chapter will try to explain and give some examples of different ways to optimise MySQL. Remember, however, that there are always some (increasingly harder) additional ways to make the system even faster.
The most important part for getting a system fast is of course the basic design. You also need to know what kinds of things your system will be doing, and what your bottlenecks are.
The most common bottlenecks are:
When using the MyISAM storage engine, MySQL uses extremely fast table locking (multiple readers / single writer). The biggest problem with this table type occurs when you have a mix of a steady stream of updates and slow selects on the same table. If this is a problem with some tables, you can use another table type for them. See section 7 MySQL Table Types.
MySQL can work with both transactional and non-transactional tables. To be able to work smoothly with non-transactional tables (which can't roll back if something goes wrong), MySQL has the following rules:
If you insert a NULL into a
NOT NULL column, or a too-big numerical value into a numerical
column, MySQL sets the column to
the 'best possible value' instead of giving an error. For numerical values this is 0, the smallest
possible value, or the largest possible value. For strings this is
either the empty string or the longest possible string that can be stored in
the column.
Calculated expressions that would give an error (for example, 1/0) return NULL instead.
For more information about this, see section 1.8.5 How MySQL deals with constraints.
The above means that you should not use MySQL to check field contents; this checking should instead be done in the application.
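As a minimal illustration of the first rule (the table and values are hypothetical, and the exact behaviour may vary between versions and settings):
mysql> CREATE TABLE t (i TINYINT NOT NULL);
mysql> INSERT INTO t VALUES (1000),(NULL);
mysql> SELECT * FROM t;   # 1000 is stored as 127 (the TINYINT maximum) and NULL as 0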
Because all SQL servers implement different parts of SQL, it takes work to write portable SQL applications. For very simple selects and inserts it is very easy, but the more functionality you need, the harder it gets. If you want an application that is fast with many database servers, it becomes even harder!
To make a complex application portable you need to choose a number of SQL servers that it should work with.
You can use the MySQL crash-me program/web page at
http://www.mysql.com/information/crash-me.php to find functions,
types, and limits you can use with a selection of database
servers. crash-me does not yet test everything possible, but it
is still fairly comprehensive, with about 450 things tested.
For example, you shouldn't have column names longer than 18 characters if you want to be able to use Informix or DB2.
Both the MySQL benchmarks and crash-me programs are very
database-independent. By taking a look at how we have handled this, you
can get a feeling for what you have to do to make your own application
database-independent. The benchmarks themselves can be found in the
`sql-bench' directory in the MySQL source
distribution. They are written in Perl with the DBI database interface
(which solves the access part of the problem).
See http://www.mysql.com/information/benchmarks.html for the results from this benchmark.
As you can see in these results, all databases have some weak points. That is, they have different design compromises that lead to different behaviour.
If you strive for database independence, you need to get a good feeling for each SQL server's bottlenecks. MySQL is very fast in retrieving and updating things, but will have a problem in mixing slow readers/writers on the same table. Oracle, on the other hand, has a big problem when you try to access rows that you have recently updated (until they are flushed to disk). Transaction databases in general are not very good at generating summary tables from log tables, as in this case row locking is almost useless.
To get your application really database-independent, you need to define an easily extendable interface through which you manipulate your data. As C++ is available on most systems, it makes sense to use a C++ class interface to the databases.
If you use some specific feature for some database (like the
REPLACE command in MySQL), you should code a method for
the other SQL servers to implement the same feature (but slower). With
MySQL you can use the /*! */ syntax to add
MySQL-specific keywords to a query. The code inside
/**/ will be treated as a comment (ignored) by most other SQL
servers.
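For example, a query like the following uses the MySQL-specific STRAIGHT_JOIN keyword, while most other SQL servers simply treat the /*! ... */ part as a comment (the table and column names are hypothetical):
SELECT /*! STRAIGHT_JOIN */ t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;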
If high performance is more important than exactness, as in some web applications, it is possible to create an application layer that caches all results to give you even higher performance. By letting old results 'expire' after a while, you can keep the cache reasonably fresh. This also provides a method for handling high load spikes, in which case you can dynamically increase the cache and set the expiry timeout higher until things get back to normal.
In this case the table creation information should contain information of the initial size of the cache and how often the table should normally be refreshed.
During MySQL's initial development, its features were made to fit our largest customer, which handles data warehousing for a couple of the biggest retailers in Sweden.
From all stores, we get weekly summaries of all bonus card transactions, and we are expected to provide useful information for the store owners to help them find how their advertisement campaigns are affecting their customers.
The data is quite large (about 7 million summary transactions per month), and we have data for 4-10 years that we need to present to the users. We get weekly requests from the customers, who want 'instant' access to new reports from this data.
We solved this by storing all information per month in compressed 'transaction' tables. We have a set of simple macros (script) that generates summary tables grouped by different criteria (product group, customer id, store ...) from the transaction tables. The reports are web pages that are dynamically generated by a small Perl script that parses a web page, executes the SQL statements in it, and inserts the results. We would have used PHP or mod_perl instead but they were not available at that time.
For graphical data we wrote a simple tool in C that can produce
GIFs based on the result of a SQL query (with some processing of the
result). This is also dynamically executed from the Perl script that
parses the HTML files.
In most cases a new report can simply be done by copying an existing script and modifying the SQL query in it. In some cases, we will need to add more fields to an existing summary table or generate a new one, but this is also quite simple, as we keep all transactions tables on disk. (Currently we have at least 50G of transactions tables and 200G of other customer data.)
We also let our customers access the summary tables directly with ODBC so that the advanced users can themselves experiment with the data.
We haven't had any problems handling this with a quite modest Sun Ultra SPARCstation (2 x 200 MHz). We recently upgraded one of our servers to a 2-CPU 400 MHz UltraSPARC, and we are now planning to start handling transactions on the product level, which would mean a ten-fold increase of data. We think we can keep up with this by just adding more disks to our systems.
We are also experimenting with Intel-Linux to be able to get more CPU power cheaper. Now that we have the binary portable database format (new in Version 3.23), we will start to use this for some parts of the application.
Our initial feeling is that Linux will perform much better on low-to-medium load, and that Solaris will perform better when you start to get a high load because of extreme disk I/O, but we don't yet have anything conclusive about this. After some discussion with a Linux kernel developer, this might be a side effect of Linux giving so many resources to the batch job that interactive performance gets very low. This makes the machine feel very slow and unresponsive while big batches are running. Hopefully this will be handled better in future Linux kernels.
This should contain a technical description of the MySQL
benchmark suite (and crash-me), but that description is not
written yet. Currently, you can get a good idea of the benchmark by
looking at the code and results in the `sql-bench' directory in any
MySQL source distribution.
This benchmark suite is meant to be a benchmark that will tell any user what things a given SQL implementation performs well or poorly at.
Note that this benchmark is single-threaded, so it measures the minimum time for the operations. In the future, we plan to add a lot of multi-threaded tests to the benchmark suite.
For example, (run on the same NT 4.0 machine):
| Reading 2000000 rows by index | Seconds | Seconds |
| mysql | 367 | 249 |
| mysql_odbc | 464 | |
| db2_odbc | 1206 | |
| informix_odbc | 121126 | |
| ms-sql_odbc | 1634 | |
| oracle_odbc | 20800 | |
| solid_odbc | 877 | |
| sybase_odbc | 17614 | |
| Inserting (350768) rows | Seconds | Seconds |
| mysql | 381 | 206 |
| mysql_odbc | 619 | |
| db2_odbc | 3460 | |
| informix_odbc | 2692 | |
| ms-sql_odbc | 4012 | |
| oracle_odbc | 11291 | |
| solid_odbc | 1801 | |
| sybase_odbc | 4802 | |
In the above test MySQL was run with an 8M index cache.
We have gathered some more benchmark results at http://www.mysql.com/information/benchmarks.html.
Note that Oracle is not included because they asked to be removed; all Oracle benchmark results would have to be approved by Oracle first! We believe that makes Oracle benchmarks very biased, because the above benchmarks are supposed to show what a standard installation can do for a single client.
To run the benchmark suite, you have to download a MySQL source distribution, install the Perl DBI driver, the Perl DBD driver for the database you want to test and then do:
shell> cd sql-bench
shell> perl run-all-tests --server=#
where # is one of the supported servers. You can get a list of all options
and supported servers with run-all-tests --help.
crash-me tries to determine what features a database supports and
what its capabilities and limitations are by actually running
queries. For example, it determines:
How big a VARCHAR column can be.
You can find the results from crash-me for many different databases at
http://www.mysql.com/information/crash-me.php.
You should definitely benchmark your application and database to find out where the bottlenecks are. After fixing one bottleneck (or by replacing it with a 'dummy module'), you can then easily identify the next bottleneck (and so on). Even if the overall performance of your application is currently sufficient, you should at least make a plan for each bottleneck, and decide how to solve it if someday you really need the extra performance.
For an example of portable benchmark programs, look at the MySQL benchmark suite. See section 5.1.4 The MySQL Benchmark Suite. You can take any program from this suite and modify it for your needs. By doing this, you can try different solutions to your problem and test which is really the fastest solution for you.
It is very common that some problems only occur when the system is very heavily loaded. We have had many customers who contact us when they have a (tested) system in production and have encountered load problems. In every one of these cases so far, the problems have been with basic design (table scans are not good at high load) or OS/library issues. Most of these would be a lot easier to fix if the systems were not already in production.
To avoid problems like this, you should put some effort into benchmarking your whole application under the worst possible load! You can use Super Smack for this; it is available at http://www.mysql.com/Downloads/super-smack/super-smack-1.0.tar.gz. As the name suggests, it can bring your system to its knees if you ask it to, so make sure to use it only on your development systems.
SELECTs and Other Queries
First, one thing that affects all queries: the more complex your permission system setup is, the more overhead you get.
If you have not issued any GRANT statements, MySQL will
optimise the permission checking somewhat. So if you have very high
volume, it may be worth the time to avoid grants. Otherwise, more
permission checks result in larger overhead.
If your problem is with some explicit MySQL function, you can always time this in the MySQL client:
mysql> SELECT BENCHMARK(1000000,1+1);
+------------------------+
| BENCHMARK(1000000,1+1) |
+------------------------+
|                      0 |
+------------------------+
1 row in set (0.32 sec)
The above shows that MySQL can execute 1,000,000 +
expressions in 0.32 seconds on a PentiumII 400MHz.
All MySQL functions should be highly optimised, but there may be
some exceptions, and BENCHMARK(loop_count,expression) is a
great tool for finding out whether this is a problem with your query.
EXPLAIN Syntax (Get Information About a SELECT)
EXPLAIN tbl_name
or EXPLAIN SELECT select_options
EXPLAIN tbl_name is a synonym for DESCRIBE tbl_name or
SHOW COLUMNS FROM tbl_name.
When you precede a SELECT statement with the keyword EXPLAIN,
MySQL explains how it would process the SELECT, providing
information about how tables are joined and in which order.
With the help of EXPLAIN, you can see when you must add indexes
to tables to get a faster SELECT that uses indexes to find the
records.
You should frequently run ANALYZE TABLE to update table statistics
such as cardinality of keys which can affect the choices the optimiser
makes. See section 4.5.2 ANALYZE TABLE Syntax.
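For example (tbl_name stands for one of your own tables):
mysql> ANALYZE TABLE tbl_name;
mysql> SHOW INDEX FROM tbl_name;   # check the Cardinality column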
You can also see if the optimiser joins the tables in an optimal
order. To force the optimiser to use a specific join order for a
SELECT statement, add a STRAIGHT_JOIN clause.
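For example, in a query such as the following (table and column names are hypothetical), the tables are joined in exactly the order given in the FROM clause:
mysql> SELECT STRAIGHT_JOIN t1.a, t2.b FROM t1, t2 WHERE t1.a = t2.a;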
For non-simple joins, EXPLAIN returns a row of information for each
table used in the SELECT statement. The tables are listed in the order
they would be read. MySQL resolves all joins using a single-sweep
multi-join method. This means that MySQL reads a row from the first
table, then finds a matching row in the second table, then in the third table
and so on. When all tables are processed, it outputs the selected columns and
backtracks through the table list until a table is found for which there are
more matching rows. The next row is read from this table and the process
continues with the next table.
In MySQL version 4.1 the EXPLAIN output was changed to work better
with constructs like UNIONs, subqueries and derived tables. Most
notable is the addition of two new columns: id and select_type.
Output from EXPLAIN consists of the following columns:
id
SELECT identifier, the sequential number of this SELECT
within the query.
select_type
Type of SELECT clause, which can be any of the following:
SIMPLE
Simple SELECT (without UNIONs or subqueries).
PRIMARY
Outermost SELECT.
UNION
Second or later SELECT statement in a UNION.
DEPENDENT UNION
Second or later SELECT statement in a UNION, dependent on an outer
subquery.
SUBQUERY
First SELECT in a subquery.
DEPENDENT SUBQUERY
First SELECT in a subquery, dependent on an outer subquery.
DERIVED
Derived table SELECT (subquery in the FROM clause).
table
The table to which this row of output refers.
type
The join type. The different join types are listed here, ordered from the best type to the worst:
system
The table has only one row (= system table). This is a special case of the
const join type.
const
The table has at most one matching row, which is read once at the start of
the query and then treated as constant by the rest of the optimiser.
const tables are very fast as they are read only once!
eq_ref
One row will be read from this table for each combination of rows from the
previous tables. This is the best possible join type, other than the
const types. It is used when all parts of an index are used by
the join and the index is UNIQUE or a PRIMARY KEY.
ref
All rows with matching index values will be read from this table for each
combination of rows from the previous tables. ref is used if the join
uses only a leftmost prefix of the key, or if the key is not UNIQUE
or a PRIMARY KEY (in other words, if the join cannot select a single
row based on the key value). If the key that is used matches only a few rows,
this join type is good.
range
Only rows that are in a given range will be retrieved, using an index to
select the rows. The key column indicates which index is used.
The key_len contains the longest key part that was used.
The ref column will be NULL for this type.
index
This is the same as ALL, except that only the index tree is
scanned. This is usually faster than ALL, as the index file is usually
smaller than the datafile.
ALL
A full table scan will be done for each combination of rows from the previous
tables. This is normally not good if the table is the first table not marked
const, and usually very bad in all other
cases. You normally can avoid ALL by adding more indexes, so that
the row can be retrieved based on constant values or column values from
earlier tables.
possible_keys
The possible_keys column indicates which indexes MySQL
could use to find the rows in this table. Note that this column is
totally independent of the order of the tables. That means that some of
the keys in possible_keys may not be usable in practice with the
generated table order.
If this column is empty, there are no relevant indexes. In this case,
you may be able to improve the performance of your query by examining
the WHERE clause to see if it refers to some column or columns
that would be suitable for indexing. If so, create an appropriate index
and check the query with EXPLAIN again. See section 6.5.4 ALTER TABLE Syntax.
To see what indexes a table has, use SHOW INDEX FROM tbl_name.
key
The key column indicates the key (index) that MySQL actually
decided to use. The key is NULL if no index was chosen. To force
MySQL to use a key listed in the possible_keys column, use
USE KEY/IGNORE KEY in your query.
See section 6.4.1 SELECT Syntax.
Also, running myisamchk --analyze (see section 4.4.6.1 myisamchk Invocation Syntax) or
ANALYZE TABLE (see section 4.5.2 ANALYZE TABLE Syntax) on the table will help the
optimiser choose better indexes.
key_len
The key_len column indicates the length of the key that
MySQL decided to use. The length is NULL if the
key is NULL. Note that this tells us how many parts of a
multi-part key MySQL will actually use.
ref
The ref column shows which columns or constants are used with the
key to select rows from the table.
rows
The rows column indicates the number of rows MySQL
believes it must examine to execute the query.
Extra
This column contains additional information about how MySQL will resolve the query. Here is an explanation of the different text strings that can appear in this column:
Distinct
MySQL will not continue searching for more rows for the current row combination after it has found the first matching row.
Not exists
MySQL was able to do a LEFT JOIN optimisation on the
query and will not examine more rows in this table for the previous row
combination after it finds one row that matches the LEFT JOIN criteria.
Here is an example for this:
SELECT * FROM t1 LEFT JOIN t2 ON t1.id=t2.id WHERE t2.id IS NULL;
Assume that t2.id is defined with NOT NULL. In this case
MySQL will scan t1 and look up the rows in t2
through t1.id. If MySQL finds a matching row in
t2, it knows that t2.id can never be NULL, and will
not scan through the rest of the rows in t2 that have the same
id. In other words, for each row in t1, MySQL
only needs to do a single lookup in t2, independent of how many
matching rows there are in t2.
range checked for each record (index map: #)
Using filesort
MySQL will need to do an extra pass to find out how to retrieve the rows in
sorted order. The sort is done by going through all rows according to the
join type and storing the sort key plus a pointer to
the row for all rows that match the WHERE clause. Then the keys are
sorted. Finally the rows are retrieved in sorted order.
Using index
The column information is retrieved from the table using only information in
the index tree, without an additional seek to read the actual row.
Using temporary
To resolve the query, MySQL will need to create a temporary table to hold the
result. This typically happens if you do an ORDER BY on a different
column set than you did a GROUP BY on.
Using where
A WHERE clause will be used to restrict which rows will be
matched against the next table or sent to the client. If you don't have
this information and the table is of type ALL or index,
you may have something wrong in your query (if you don't intend to
fetch/examine all rows from the table).
If you want to get your queries as fast as possible, you should look out for
Using filesort and Using temporary.
You can get a good indication of how good a join is by multiplying all values
in the rows column of the EXPLAIN output. This should tell you
roughly how many rows MySQL must examine to execute the query. This
number is also used when you restrict queries with the max_join_size
variable.
See section 5.5.2 Tuning Server Parameters.
The following example shows how a JOIN can be optimised progressively
using the information provided by EXPLAIN.
Suppose you have the SELECT statement shown here, that you examine
using EXPLAIN:
EXPLAIN SELECT tt.TicketNumber, tt.TimeIn,
tt.ProjectReference, tt.EstimatedShipDate,
tt.ActualShipDate, tt.ClientID,
tt.ServiceCodes, tt.RepetitiveID,
tt.CurrentProcess, tt.CurrentDPPerson,
tt.RecordVolume, tt.DPPrinted, et.COUNTRY,
et_1.COUNTRY, do.CUSTNAME
FROM tt, et, et AS et_1, do
WHERE tt.SubmitTime IS NULL
AND tt.ActualPC = et.EMPLOYID
AND tt.AssignedPC = et_1.EMPLOYID
AND tt.ClientID = do.CUSTNMBR;
For this example, assume that:
| Table | Column | Column type |
tt | ActualPC | CHAR(10)
|
tt | AssignedPC | CHAR(10)
|
tt | ClientID | CHAR(10)
|
et | EMPLOYID | CHAR(15)
|
do | CUSTNMBR | CHAR(15)
|
| Table | Index |
tt | ActualPC
|
tt | AssignedPC
|
tt | ClientID
|
et | EMPLOYID (primary key)
|
do | CUSTNMBR (primary key)
|
The tt.ActualPC values aren't evenly distributed.
Initially, before any optimisations have been performed, the EXPLAIN
statement produces the following information:
table type possible_keys key key_len ref rows Extra
et ALL PRIMARY NULL NULL NULL 74
do ALL PRIMARY NULL NULL NULL 2135
et_1 ALL PRIMARY NULL NULL NULL 74
tt ALL AssignedPC,ClientID,ActualPC NULL NULL NULL 3872
range checked for each record (key map: 35)
Because type is ALL for each table, this output indicates that
MySQL is doing a full join for all tables! This will take quite a
long time, as the product of the number of rows in each table must be
examined! For the case at hand, this is 74 * 2135 * 74 * 3872 =
45,268,558,720 rows. If the tables were bigger, you can only imagine how
long it would take.
One problem here is that MySQL can't (yet) use indexes on columns
efficiently if they are declared differently. In this context,
VARCHAR and CHAR are the same unless they are declared as
different lengths. Because tt.ActualPC is declared as CHAR(10)
and et.EMPLOYID is declared as CHAR(15), there is a length
mismatch.
To fix this disparity between column lengths, use ALTER TABLE to
lengthen ActualPC from 10 characters to 15 characters:
mysql> ALTER TABLE tt MODIFY ActualPC VARCHAR(15);
Now tt.ActualPC and et.EMPLOYID are both VARCHAR(15).
Executing the EXPLAIN statement again produces this result:
table type possible_keys key key_len ref rows Extra
tt ALL AssignedPC,ClientID,ActualPC NULL NULL NULL 3872 Using where
do ALL PRIMARY NULL NULL NULL 2135
range checked for each record (key map: 1)
et_1 ALL PRIMARY NULL NULL NULL 74
range checked for each record (key map: 1)
et eq_ref PRIMARY PRIMARY 15 tt.ActualPC 1
This is not perfect, but is much better (the product of the rows
values is now less by a factor of 74). This version is executed in a couple
of seconds.
A second alteration can be made to eliminate the column length mismatches
for the tt.AssignedPC = et_1.EMPLOYID and tt.ClientID =
do.CUSTNMBR comparisons:
mysql> ALTER TABLE tt MODIFY AssignedPC VARCHAR(15),
-> MODIFY ClientID VARCHAR(15);
Now EXPLAIN produces the output shown here:
table type possible_keys key key_len ref rows Extra
et ALL PRIMARY NULL NULL NULL 74
tt ref AssignedPC, ActualPC 15 et.EMPLOYID 52 Using where
ClientID,
ActualPC
et_1 eq_ref PRIMARY PRIMARY 15 tt.AssignedPC 1
do eq_ref PRIMARY PRIMARY 15 tt.ClientID 1
This is almost as good as it can get.
The remaining problem is that, by default, MySQL assumes that values
in the tt.ActualPC column are evenly distributed, and that isn't the
case for the tt table. Fortunately, it is easy to tell MySQL
about this:
shell> myisamchk --analyze PATH_TO_MYSQL_DATABASE/tt
shell> mysqladmin refresh
Now the join is perfect, and EXPLAIN produces this result:
table type possible_keys key key_len ref rows Extra
tt ALL AssignedPC NULL NULL NULL 3872 Using where
ClientID,
ActualPC
et eq_ref PRIMARY PRIMARY 15 tt.ActualPC 1
et_1 eq_ref PRIMARY PRIMARY 15 tt.AssignedPC 1
do eq_ref PRIMARY PRIMARY 15 tt.ClientID 1
Note that the rows column in the output from EXPLAIN is an
educated guess from the MySQL join optimiser. To optimise a
query, you should check if the numbers are even close to the truth. If not,
you may get better performance by using STRAIGHT_JOIN in your
SELECT statement and trying to list the tables in a different order in
the FROM clause.
In most cases you can estimate the performance by counting disk seeks.
For small tables, you can usually find the row in 1 disk seek (as the
index is probably cached). For bigger tables, you can estimate that
(using B++ tree indexes) you will need: log(row_count) /
log(index_block_length / 3 * 2 / (index_length + data_pointer_length)) +
1 seeks to find a row.
In MySQL an index block is usually 1024 bytes and the data
pointer is usually 4 bytes. A 500,000 row table with an
index length of 3 (medium integer) gives you:
log(500,000)/log(1024/3*2/(3+4)) + 1 = 4 seeks.
As the above index would require about 500,000 * 7 * 3/2 = 5.2M, (assuming that the index buffers are filled to 2/3, which is typical) you will probably have much of the index in memory and you will probably only need 1-2 calls to read data from the OS to find the row.
For writes, however, you will need 4 seek requests (as above) to find where to place the new index and normally 2 seeks to update the index and write the row.
Note that the above doesn't mean that your application will slowly degenerate by log N! As long as everything is cached by the OS or the SQL server, things will only go marginally slower as the table gets bigger. After the data gets too big to be cached, things will start to go much slower until your application is bound only by disk seeks (which increase by log N). To avoid this, increase the index cache as the data grows. See section 5.5.2 Tuning Server Parameters.
SELECT Queries
In general, when you want to make a slow SELECT ... WHERE faster, the
first thing to check is whether you can add an index. See section 5.4.3 How MySQL Uses Indexes. All references between different tables
should usually be done with indexes. You can use the EXPLAIN command
to determine which indexes are used for a SELECT.
See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
Some general tips:
To help MySQL optimise queries better, run myisamchk
--analyze on a table after it has been loaded with relevant data. This
updates a value for each index part that indicates the average number of
rows that have the same value. (For unique indexes, this is always 1,
of course.) MySQL will use this to decide which index to
choose when you connect two tables with 'a non-constant expression'.
You can check the result from the analyze run by doing SHOW
INDEX FROM table_name and examining the Cardinality column.
To sort an index and data according to an index, use myisamchk
--sort-index --sort-records=1 (if you want to sort on index 1). If you
have a unique index from which you want to read all records in order
according to that index, this is a good way to make that faster. Note,
however, that this sorting isn't written optimally and will take a long
time for a large table!
WHERE Clauses
The WHERE optimisations are put in the SELECT part here because
they are mostly used with SELECT, but the same optimisations apply for
WHERE in DELETE and UPDATE statements.
Also note that this section is incomplete. MySQL does many optimisations, and we have not had time to document them all.
Some of the optimisations performed by MySQL are listed here:
((a AND b) AND c OR (((a AND b) AND (c AND d)))) -> (a AND b AND c) OR (a AND b AND c AND d)
(a<b AND b=c) AND a=5 -> b>5 AND b=c AND a=5
(B>=5 AND B=5) OR (B=6 AND 5=5) OR (B=7 AND 5=6) -> B=5 OR B=6
COUNT(*) on a single table without a WHERE is retrieved
directly from the table information for MyISAM and HEAP tables.
This is also done for any NOT NULL expression when used with only one
table.
MySQL quickly detects that some SELECT statements are impossible and returns no rows.
HAVING is merged with WHERE if you don't use GROUP BY
or group functions (COUNT(), MIN()...).
The WHERE is constructed to get a fast
WHERE evaluation for each sub-join and also to skip records as
soon as possible.
All constant tables are read first, before any other tables in the query.
A constant table is an empty table, a table with one row, or a table that is
used with a WHERE clause on a UNIQUE index or a PRIMARY KEY,
where all index parts are used with constant
expressions and the index parts are defined as NOT NULL.
All the following tables are used as constant tables:
mysql> SELECT * FROM t WHERE primary_key=1;
mysql> SELECT * FROM t1,t2
-> WHERE t1.primary_key=1 AND t2.primary_key=t1.id;
If all columns in ORDER BY and in GROUP
BY come from the same table, then this table is preferred first when
joining.
If there is an ORDER BY clause and a different GROUP BY
clause, or if the ORDER BY or GROUP BY contains columns
from tables other than the first table in the join queue, a temporary
table is created.
If you use SQL_SMALL_RESULT, MySQL will use an in-memory
temporary table.
Before each record is output, those that do not match the HAVING clause
are skipped.
Some examples of queries that are very fast:
mysql> SELECT COUNT(*) FROM tbl_name;
mysql> SELECT MIN(key_part1),MAX(key_part1) FROM tbl_name;
mysql> SELECT MAX(key_part2) FROM tbl_name
-> WHERE key_part_1=constant;
mysql> SELECT ... FROM tbl_name
-> ORDER BY key_part1,key_part2,... LIMIT 10;
mysql> SELECT ... FROM tbl_name
-> ORDER BY key_part1 DESC,key_part2 DESC,... LIMIT 10;
The following queries are resolved using only the index tree (assuming the indexed columns are numeric):
mysql> SELECT key_part1,key_part2 FROM tbl_name WHERE key_part1=val;
mysql> SELECT COUNT(*) FROM tbl_name
-> WHERE key_part1=val1 AND key_part2=val2;
mysql> SELECT key_part2 FROM tbl_name GROUP BY key_part1;
The following queries use indexing to retrieve the rows in sorted order without a separate sorting pass:
mysql> SELECT ... FROM tbl_name
-> ORDER BY key_part1,key_part2,... ;
mysql> SELECT ... FROM tbl_name
-> ORDER BY key_part1 DESC,key_part2 DESC,... ;
DISTINCT
DISTINCT is converted to a GROUP BY on all columns.
DISTINCT combined with ORDER BY will in many cases also
need a temporary table.
When combining LIMIT # with DISTINCT, MySQL will stop
as soon as it finds # unique rows.
If you don't use columns from all used tables, MySQL will stop scanning the not-used tables as soon as it has found the first match.
SELECT DISTINCT t1.a FROM t1,t2 where t1.a=t2.a;
In this case, assuming t1 is used before t2 (check with
EXPLAIN), then MySQL will stop reading from t2 (for that
particular row in t1) when the first row in t2 is found.
LEFT JOIN and RIGHT JOIN
A LEFT JOIN B in MySQL is implemented as follows:
Table B is set to be dependent on table A and all tables
that A is dependent on.
Table A is set to be dependent on all tables (except B)
that are used in the LEFT JOIN condition.
All LEFT JOIN conditions are moved to the WHERE clause.
All standard WHERE optimisations are done.
If there is a row in A that matches the WHERE clause, but there
wasn't any row in B that matched the LEFT JOIN condition,
then an extra B row is generated with all columns set to NULL.
If you use LEFT JOIN to find rows that don't exist in some
table and you have the following test: column_name IS NULL in the
WHERE part, where column_name is a column that is declared as
NOT NULL, then MySQL will stop searching for more rows
(for a particular key combination) after it has found one row that
matches the LEFT JOIN condition.
RIGHT JOIN is implemented analogously to LEFT JOIN.
The table read order forced by LEFT JOIN and STRAIGHT JOIN
will help the join optimiser (which calculates in which order tables
should be joined) to do its work much more quickly, as there are fewer
table permutations to check.
Note that the above means that if you do a query of type:
SELECT * FROM a,b LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key)
WHERE b.key=d.key
MySQL will do a full scan on b as the LEFT JOIN will force
it to be read before d.
The fix in this case is to change the query to:
SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key)
WHERE b.key=d.key
ORDER BY
In some cases MySQL can use an index to satisfy an ORDER BY or
GROUP BY request without doing any extra sorting.
The index can also be used even if the ORDER BY doesn't match the
index exactly, as long as all the unused index parts and all the extra
ORDER BY columns are constants in the WHERE
clause. The following queries will use the index to resolve the
ORDER BY / GROUP BY part:
SELECT * FROM t1 ORDER BY key_part1,key_part2,...
SELECT * FROM t1 WHERE key_part1=constant ORDER BY key_part2
SELECT * FROM t1 WHERE key_part1=constant GROUP BY key_part2
SELECT * FROM t1 ORDER BY key_part1 DESC,key_part2 DESC
SELECT * FROM t1 WHERE key_part1=1 ORDER BY key_part1 DESC,key_part2 DESC
Some cases where MySQL cannot use indexes to resolve the ORDER
BY (note that MySQL will still use indexes to find the rows that
match the WHERE clause):
You do an ORDER BY on different keys:
SELECT * FROM t1 ORDER BY key1,key2
You do an ORDER BY using non-consecutive key parts:
SELECT * FROM t1 WHERE key2=constant ORDER BY key_part2
You mix ASC and DESC:
SELECT * FROM t1 ORDER BY key_part1 DESC,key_part2 ASC
The key used to fetch the rows is not the same as the one used in the ORDER BY:
SELECT * FROM t1 WHERE key2=constant ORDER BY key1
You are joining many tables, and the columns in the ORDER
BY are not all from the first not-const table that is used to
retrieve rows. (This is the first table in the EXPLAIN output which
doesn't use a const row fetch method.)
You have different ORDER BY and GROUP BY expressions.
The used table index is an index type that doesn't store rows in order
(like the HASH index in HEAP tables).
In the cases where MySQL has to sort the result, it uses the following algorithm:
Read all rows according to the key or by scanning the table. Rows that don't
match the WHERE clause are skipped.
Store the sort key for each row in a buffer (of size sort_buffer).
When the buffer gets full, sort it and store the result in a temporary file,
repeating until all rows have been read.
Do a multi-merge of up to MERGEBUFF (7) regions to one block in
another temporary file. Repeat until all blocks from the first file
are in the second file.
Repeat the above until there are fewer than MERGEBUFF2 (15)
blocks left.
On the last merge, only the pointers to the rows are written to a result
file; the rows are then read in sorted order into a row buffer
(of size record_rnd_buffer).
With EXPLAIN SELECT ... ORDER BY you can check whether MySQL can use
indexes to resolve the query. If you get Using filesort in the
Extra column, then MySQL can't use indexes to resolve the
ORDER BY. See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
If you want to have a higher ORDER BY speed, you should first
see if you can get MySQL to use indexes instead of having to do an extra
sorting phase. If this is not possible, you can try the following:
Increase the size of the sort_buffer variable.
Increase the size of the record_rnd_buffer variable.
Change tmpdir to point to a dedicated disk with lots of empty space.
If you use MySQL 4.1 or later you can spread load between
several physical disks by setting tmpdir to a list of paths
separated by colon : (semicolon ; on Windows). They
will be used in round-robin fashion.
Note: These paths should end up on different physical disks,
not different partitions of the same disk.
MySQL by default sorts all GROUP BY x,y[,...] queries as if you
had also specified ORDER BY x,y[,...]. MySQL will optimise
away the ORDER BY in that case without any speed penalty. If in
some cases you don't want to have the result sorted, you can specify
ORDER BY NULL:
INSERT INTO foo SELECT a,COUNT(*) FROM bar GROUP BY a ORDER BY NULL;
LIMIT
In some cases MySQL will handle the query differently when you are
using LIMIT # and not using HAVING:
If you are selecting only a few rows with LIMIT, MySQL
will use indexes in some cases when it normally would prefer to do a
full table scan.
If you use LIMIT # with ORDER BY, MySQL will end the
sorting as soon as it has found the first # rows instead of sorting
the whole table.
When combining LIMIT # with DISTINCT, MySQL will stop
as soon as it finds # unique rows.
In some cases a GROUP BY can be resolved by reading the key in order
(or doing a sort on the key) and then calculating summaries until the
key value changes. In this case LIMIT # will not calculate any
unnecessary GROUP BYs.
As soon as MySQL has sent the first # rows to the client, it
will abort the query (if you are not using SQL_CALC_FOUND_ROWS).
LIMIT 0 will always quickly return an empty set. This is useful
to check the query and to get the column types of the result columns.
The size of temporary tables uses LIMIT # to calculate how much space is required to resolve the query.
INSERT Queries
The time to insert a record consists approximately of the following, where the numbers are roughly proportional to the overall time:
Connecting: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting record: (1 x size of record)
Inserting indexes: (1 x number of indexes)
Closing: (1)
This does not take into consideration the initial overhead to open tables (which is done once for each concurrently running query).
The size of the table slows down the insertion of indexes by log N (B-trees).
Some ways to speed up inserts:
If you are inserting many rows from the same client at the same time, use
multiple-row INSERT statements. This is much faster (many times
faster in some cases) than using separate single-row INSERT statements. If you are adding
data to a non-empty table, you may tune the bulk_insert_buffer_size
variable to make it even faster.
See section 4.5.7.4 SHOW VARIABLES.
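A minimal example of such a multiple-row statement (table and column names are hypothetical):
mysql> INSERT INTO tbl_name (col1,col2) VALUES (1,'a'),(2,'b'),(3,'c');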
If you are inserting a lot of rows from different clients, you can get higher speed by using the INSERT DELAYED statement. See section 6.4.3 INSERT Syntax.
With MyISAM tables you can insert rows at the same time
SELECTs are running if there are no deleted rows in the tables.
When loading a table from a text file, use LOAD DATA INFILE. This
is usually 20 times faster than using a lot of INSERT statements.
See section 6.4.9 LOAD DATA INFILE Syntax.
With some extra work, it is possible to make LOAD DATA INFILE run even
faster when the table has many indexes. Use the following procedure:
Optionally create the table with CREATE TABLE, for example, using
mysql or Perl-DBI.
Execute a FLUSH TABLES statement or the shell command mysqladmin
flush-tables.
Run myisamchk --keys-used=0 -rq /path/to/db/tbl_name. This will
remove all usage of all indexes from the table.
Insert the data into the table with LOAD DATA INFILE. This will not
update any indexes and will therefore be very fast.
If you are only going to read the table in the future, run myisampack
on it to make it smaller. See section 7.1.2.3 Compressed Table Characteristics.
Re-create the indexes with myisamchk -r -q
/path/to/db/tbl_name. This will create the index tree in memory before
writing it to disk, which is much faster because it avoids lots of disk
seeks. The resulting index tree is also perfectly balanced.
Execute a FLUSH TABLES statement or the shell command mysqladmin
flush-tables.
LOAD DATA INFILE also does the above optimisation automatically if
you insert into an empty table; the main difference with the above
procedure is that you can let myisamchk allocate much more temporary
memory for the index creation than you might want MySQL to allocate for
every index re-creation.
Since MySQL 4.0 you can also use
ALTER TABLE tbl_name DISABLE KEYS instead of
myisamchk --keys-used=0 -rq /path/to/db/tbl_name and
ALTER TABLE tbl_name ENABLE KEYS instead of
myisamchk -r -q /path/to/db/tbl_name. This way you can also skip
the FLUSH TABLES steps.
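A minimal sketch of this variant (assuming MySQL 4.0 or later; the table and file names are hypothetical):
mysql> ALTER TABLE tbl_name DISABLE KEYS;
mysql> LOAD DATA INFILE '/path/to/data.txt' INTO TABLE tbl_name;
mysql> ALTER TABLE tbl_name ENABLE KEYS;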
You can speed up insertions that are done with multiple statements by locking your tables:
mysql> LOCK TABLES a WRITE;
mysql> INSERT INTO a VALUES (1,23),(2,34),(4,33);
mysql> INSERT INTO a VALUES (8,26),(6,29);
mysql> UNLOCK TABLES;
The main speed difference is that the index buffer is flushed to disk only
once, after all INSERT statements have completed. Normally there would
be as many index buffer flushes as there are different INSERT
statements. Locking is not needed if you can insert all rows with a single
statement.
For transactional tables, you should use BEGIN/COMMIT instead of
LOCK TABLES to get a speedup.
Locking will also lower the total time of multi-connection tests, but the
maximum wait time for some threads will go up (because they wait for
locks). For example:
thread 1 does 1000 inserts
threads 2, 3, and 4 do 1 insert
thread 5 does 1000 inserts
If you don't use locking, threads 2, 3, and 4 will finish before 1 and 5. If you use locking, threads 2, 3, and 4 probably will not finish before 1 or 5, but the total time should be about 40% faster. As
INSERT, UPDATE, and DELETE operations are very
fast in MySQL, you will obtain better overall performance by
adding locks around everything that does more than about 5 inserts or
updates in a row. If you do very many inserts in a row, you could do a
LOCK TABLES followed by an UNLOCK TABLES once in a while
(about each 1000 rows) to allow other threads access to the table. This
would still result in a nice performance gain.
Of course, LOAD DATA INFILE is much faster for loading data.
To get some more speed for both LOAD DATA INFILE and
INSERT, enlarge the key buffer. See section 5.5.2 Tuning Server Parameters.
UPDATE Queries
Update queries are optimised as a SELECT query with the additional
overhead of a write. The speed of the write is dependent on the size of
the data that is being updated and the number of indexes that are
updated. Indexes that are not changed will not be updated.
Another way to get fast updates is to delay updates and then do many updates in a row later. Doing many updates in a row is much quicker than doing one at a time if you lock the table.
Note that, with dynamic record format, updating a record to
a longer total length may split the record. So if you do this often,
it is very important to run OPTIMIZE TABLE occasionally.
See section 4.5.1 OPTIMIZE TABLE Syntax.
DELETE Queries
If you want to delete all rows in the table, you should use
TRUNCATE TABLE table_name. See section 6.4.7 TRUNCATE Syntax.
The time to delete a record is exactly proportional to the number of indexes. To delete records more quickly, you can increase the size of the index cache. See section 5.5.2 Tuning Server Parameters.
Unsorted tips for faster systems:
Use persistent connections to the database to avoid the connection overhead. If you can't use persistent connections and you are making a lot of new connections to the database, you may want to change the value of the thread_cache_size variable. See section 5.5.2 Tuning Server Parameters.
Always check that all your queries really use the indexes you have created
in the tables. In MySQL you can do this with the EXPLAIN
command. See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
Try to avoid complex SELECT queries on MyISAM tables that are
updated a lot. This is to avoid problems with table locking.
MyISAM tables can insert rows into a table without deleted
rows at the same time another thread is reading from it. If this is important
for you, you should consider methods where you don't have to delete rows,
or run OPTIMIZE TABLE after you have deleted a lot of rows.
Use ALTER TABLE ... ORDER BY expr1,expr2... if you mostly
retrieve rows in expr1,expr2... order. By using this option after big
changes to the table, you may be able to get higher performance.
In some cases it may make sense to introduce a column that is 'hashed'
based on information from other columns. If this column is short and
reasonably unique, it may be much faster than a big index on many columns.
In MySQL it is very easy to use this extra column:
SELECT * FROM table_name WHERE hash=MD5(CONCAT(col1,col2))
AND col_1='constant' AND col_2='constant'
For MyISAM tables that change a lot, you should try to avoid VARCHAR,
TEXT, and BLOB columns. You will get dynamic row length as soon as you
are using a single VARCHAR or BLOB column. See section 7 MySQL Table Types.
Updating a column in place, as in UPDATE table SET count=count+1
WHERE index_column=constant, is very fast!
This is really important when you use MySQL table types like MyISAM and
ISAM that
only have table locking (multiple readers / single writers). This will
also give better performance with most databases, as the row locking
manager in this case will have less to do.
Use INSERT /*! DELAYED */ when you do not need to know when your
data is written. This speeds things up because many records can be written
with a single disk write.
Use INSERT /*! LOW_PRIORITY */ when you want your selects to be
more important.
Use SELECT /*! HIGH_PRIORITY */ to get selects that jump the
queue. That is, the select is done even if there is somebody waiting to
do a write.
Use the multiple-row INSERT statement to store many rows with one
SQL command (many SQL servers support this).
Use LOAD DATA INFILE to load larger amounts of data. This is
faster than normal inserts and will be even faster when myisamchk
is integrated in mysqld.
Use AUTO_INCREMENT columns to generate unique values.
Use OPTIMIZE TABLE once in a while to avoid fragmentation when
using a dynamic table format. See section 4.5.1 OPTIMIZE TABLE Syntax.
Use HEAP tables to get more speed when possible. See section 7 MySQL Table Types.
Use sensibly short column names (for example, name instead of
customer_name in the customer table). To make your names portable
to other SQL servers you should keep them shorter than 18 characters.
If you need really high speed, you should take a look at the low-level
interfaces for data storage that the different SQL servers support. For
example, by accessing MyISAM directly, you could
get a speed increase of 2-5 times compared to using the SQL interface.
To be able to do this the data must be on the same server as
the application, and usually it should only be accessed by one process
(because external file locking is really slow). One could eliminate the
above problems by introducing low-level MyISAM commands in the
MySQL server (this could be one easy way to get more
performance if needed). By carefully designing the database interface,
it should be quite easy to support this type of optimisation.
Creating a table with DELAY_KEY_WRITE=1 will make the updating of
indexes faster, as these are not logged to disk until the file is closed.
The downside is that you should run myisamchk on these tables before
you start mysqld, to ensure that they are okay if something killed
mysqld in the middle. As the key information can always be generated
from the data, you should not lose anything by using DELAY_KEY_WRITE.
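A minimal example of creating such a table (the table itself is hypothetical):
mysql> CREATE TABLE hit_counter (id INT NOT NULL PRIMARY KEY, hits INT NOT NULL)
    ->        TYPE=MyISAM DELAY_KEY_WRITE=1;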
You can find a discussion about different locking methods in the appendix. See section E.4 Locking methods.
All locking in MySQL is deadlock-free, except for InnoDB and
BDB type tables.
This is managed by always
requesting all needed locks at once at the beginning of a query and always
locking the tables in the same order.
InnoDB type tables automatically acquire their row locks and
BDB type tables
their page locks during the processing of SQL statements, not at the start
of the transaction.
The locking method MySQL uses for WRITE locks works as follows:
If there are no locks on the table, put a write lock on it.
Otherwise, put the lock request in the write lock queue.
The locking method MySQL uses for READ locks works as follows:
If there are no write locks on the table, put a read lock on it.
Otherwise, put the lock request in the read lock queue.
When a lock is released, the lock is made available to the threads in the write lock queue, then to the threads in the read lock queue.
This means that if you have many updates on a table, SELECT
statements will wait until there are no more updates.
To work around this for the case where you want to do many INSERT and
SELECT operations on a table, you can insert rows in a temporary
table and update the real table with the records from the temporary table
once in a while.
This can be done with the following code:
mysql> LOCK TABLES real_table WRITE, insert_table WRITE; mysql> INSERT INTO real_table SELECT * FROM insert_table; mysql> TRUNCATE TABLE insert_table; mysql> UNLOCK TABLES;
You can use the LOW_PRIORITY option with INSERT,
UPDATE, or DELETE, or HIGH_PRIORITY with
SELECT, if you want to prioritise retrieval in some specific
cases. You can also start mysqld with --low-priority-updates
to get the same behaviour.
Using SQL_BUFFER_RESULT can also help to make table locks shorter.
See section 6.4.1 SELECT Syntax.
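For example (the table and column names are hypothetical):
mysql> UPDATE LOW_PRIORITY counters SET hits=hits+1 WHERE page_id=1;
mysql> SELECT HIGH_PRIORITY * FROM counters WHERE page_id=1;
mysql> SELECT SQL_BUFFER_RESULT * FROM counters;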
You could also change the locking code in `mysys/thr_lock.c' to use a single queue. In this case, write locks and read locks would have the same priority, which might help some applications.
The table locking code in MySQL is deadlock free.
MySQL uses table locking (instead of row locking or column
locking) on all table types, except InnoDB and BDB tables,
to achieve a very
high lock speed. For large tables, table locking is much better than
row locking for most applications, but there are, of course, some
pitfalls.
For InnoDB and BDB tables, MySQL only uses table
locking if you explicitly lock the table with LOCK TABLES.
For these table types we recommend that you not use
LOCK TABLES at all, because InnoDB uses automatic
row level locking and BDB uses page level locking to
ensure transaction isolation.
In MySQL Version 3.23.7 and above, you can insert rows into
MyISAM tables at the same time other threads are reading from the
table. Note that currently this only works if there are no holes after
deleted rows in the table at the time the insert is made. When all holes
have been filled with new data, concurrent inserts will automatically be
enabled again.
Table locking enables many threads to read from a table at the same time, but if a thread wants to write to a table, it must first get exclusive access. During the update, all other threads that want to access this particular table will wait until the update is ready.
As updates on tables normally are considered to be more important than
SELECT, all statements that update a table have higher priority
than statements that retrieve information from a table. This should
ensure that updates are not 'starved' because someone issues a lot of heavy
queries against a specific table. (You can change this by using
LOW_PRIORITY with the statement that does the update or
HIGH_PRIORITY with the SELECT statement.)
Starting from MySQL Version 3.23.7, you can use the
max_write_lock_count variable to force MySQL to
temporarily give all SELECT statements that are waiting for a table a
higher priority after a specific number of inserts on the table.
Table locking is, however, not very good under the following scenario:
A client issues a SELECT that takes a long time to run.
Another client then issues an UPDATE on the same table. This client
will wait until the SELECT is finished.
Another client issues another SELECT statement on the same table. As
UPDATE has higher priority than SELECT, this SELECT
will wait for the UPDATE to finish. It will also wait for the first
SELECT to finish!
The same thing happens when a thread is waiting for something like a full disk, in which case all
threads that want to access the problem table will also be put in a waiting
state until more disk space is made available.
Some possible solutions to this problem are:
Try to get the SELECT statements to run faster. You may have to create
some summary tables to do this.
Start mysqld with --low-priority-updates. This will give
all statements that update (modify) a table lower priority than a SELECT
statement. In this case the last SELECT statement in the previous
scenario would execute before the UPDATE statement.
You can give a specific INSERT, UPDATE, or DELETE
statement lower priority with the LOW_PRIORITY attribute.
Start mysqld with a low value for max_write_lock_count to give
READ locks after a certain number of WRITE locks.
You can specify that all updates from a specific thread should be done with
low priority by using the SQL command SET LOW_PRIORITY_UPDATES=1.
See section 5.5.6 SET Syntax.
You can specify that a specific SELECT is very important with the
HIGH_PRIORITY attribute. See section 6.4.1 SELECT Syntax.
If you have problems with INSERT combined with SELECT,
switch to using the new MyISAM tables, as these support concurrent
SELECTs and INSERTs.
If you mainly mix INSERT and SELECT statements, the
DELAYED attribute to INSERT will probably solve your problems.
See section 6.4.3 INSERT Syntax.
If you have problems with SELECT and DELETE, the LIMIT
option to DELETE may help. See section 6.4.6 DELETE Syntax.
MySQL keeps row data and index data in separate files. Many (almost all) other databases mix row and index data in the same file. We believe that the MySQL choice is better for a very wide range of modern systems.
Another way to store the row data is to keep the information for each column in a separate area (examples are SDBM and Focus). This will cause a performance hit for every query that accesses more than one column. Because this degenerates so quickly when more than one column is accessed, we believe that this model is not good for general purpose databases.
The more common case is that the index and data are stored together (like in Oracle/Sybase et al). In this case you will find the row information at the leaf page of the index. The good thing with this layout is that it, in many cases, depending on how well the index is cached, saves a disk read. The bad things with this layout are:
One of the most basic optimisations is to get your data (and indexes) to take as little space on disk (and in memory) as possible. This can give huge improvements because disk reads are faster, and normally less main memory will be used. Indexing also takes fewer resources if done on smaller columns.
MySQL supports a lot of different table types and row formats. Choosing the right table format may give you a big performance gain. See section 7 MySQL Table Types.
You can get better performance on a table and minimise storage space using the techniques listed here:
Use the most efficient (smallest) types possible. For example, MEDIUMINT is often better than INT.
Declare columns to be NOT NULL if possible. It makes everything
faster and you save one bit per column. Note that if you really need
NULL in your application you should definitely use it; just avoid
having it on all columns by default.
If you don't have any variable-length columns (VARCHAR,
TEXT, or BLOB columns), a fixed-size record format is
used. This is faster but unfortunately may waste some space.
See section 7.1.2 MyISAM Table Formats.
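A minimal sketch combining these points (the table is hypothetical):
mysql> CREATE TABLE customer (
    ->   id   MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->   name CHAR(30) NOT NULL    # fixed-length column keeps the row format static
    -> );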
Indexes are used to find rows with specific column values quickly. Without an index MySQL has to start with the first record and then read through the whole table until it finds the relevant rows. The bigger the table, the more this costs. If the table has an index for the columns in question, MySQL can quickly get a position to seek to in the middle of the datafile without having to look at all the data. If a table has 1000 rows, this is at least 100 times faster than reading sequentially. Note that if you need to access almost all 1000 rows, it is faster to read sequentially, because that minimises disk seeks.
All MySQL indexes (PRIMARY, UNIQUE, and
INDEX) are stored in B-trees. Strings are automatically prefix-
and end-space compressed. See section 6.5.7 CREATE INDEX Syntax.
Indexes are used to:
Quickly find the rows matching a WHERE clause.
Find the MAX() or MIN() value for a specific indexed
column. This is optimised by a preprocessor that checks if you are
using WHERE key_part_# = constant on all key parts < N. In this case
MySQL will do a single key lookup and replace the MIN()
expression with a constant. If all expressions are replaced with
constants, the query will return at once:
SELECT MIN(key_part2),MAX(key_part2) FROM table_name WHERE key_part1=10
Sort or group a table if the sorting or grouping is done on a leftmost prefix
of a usable key (for example, ORDER BY
key_part1,key_part2). The key is read in reverse order if all key
parts are followed by DESC. See section 5.2.7 How MySQL Optimises ORDER BY.
In some cases a query can be optimised to retrieve values without consulting
the datafile. If all used columns for some table are numeric and form a
leftmost prefix for some key, the values may be retrieved from the index tree
for greater speed:
SELECT key_part3 FROM table_name WHERE key_part1=1
Suppose you issue the following SELECT statement:
mysql> SELECT * FROM tbl_name WHERE col1=val1 AND col2=val2;
If a multiple-column index exists on col1 and col2, the
appropriate rows can be fetched directly. If separate single-column
indexes exist on col1 and col2, the optimiser tries to
find the most restrictive index by deciding which index will find fewer
rows and using that index to fetch the rows.
If the table has a multiple-column index, any leftmost prefix of the
index can be used by the optimiser to find rows. For example, if you
have a three-column index on (col1,col2,col3), you have indexed
search capabilities on (col1), (col1,col2), and
(col1,col2,col3).
MySQL can't use a partial index if the columns don't form a
leftmost prefix of the index. Suppose you have the SELECT
statements shown here:
mysql> SELECT * FROM tbl_name WHERE col1=val1;
mysql> SELECT * FROM tbl_name WHERE col2=val2;
mysql> SELECT * FROM tbl_name WHERE col2=val2 AND col3=val3;
If an index exists on (col1,col2,col3), only the first query
shown above uses the index. The second and third queries do involve
indexed columns, but (col2) and (col2,col3) are not
leftmost prefixes of (col1,col2,col3).
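If you are unsure whether a given query can use the index, you can check with EXPLAIN (using the same placeholder names as above):
mysql> EXPLAIN SELECT * FROM tbl_name WHERE col1=val1;
mysql> EXPLAIN SELECT * FROM tbl_name WHERE col2=val2;
The first EXPLAIN should list the index in the possible_keys column; the second should not.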
MySQL also uses indexes for LIKE comparisons if the argument
to LIKE is a constant string that doesn't start with a wildcard
character. For example, the following SELECT statements use indexes:
mysql> SELECT * FROM tbl_name WHERE key_col LIKE "Patrick%";
mysql> SELECT * FROM tbl_name WHERE key_col LIKE "Pat%_ck%";
In the first statement, only rows with "Patrick" <= key_col <
"Patricl" are considered. In the second statement, only rows with
"Pat" <= key_col < "Pau" are considered.
The following SELECT statements will not use indexes:
mysql> SELECT * FROM tbl_name WHERE key_col LIKE "%Patrick%";
mysql> SELECT * FROM tbl_name WHERE key_col LIKE other_col;
In the first statement, the LIKE value begins with a wildcard
character. In the second statement, the LIKE value is not a
constant.
MySQL 4.0 performs another optimisation on LIKE. If you use
... LIKE "%string%" and string is longer than 3 characters,
MySQL will use the Turbo Boyer-Moore algorithm to initialise the
pattern for the string and then use this pattern to perform the search
more quickly.
Searching using column_name IS NULL will use indexes if column_name
has an index.
MySQL normally uses the index that finds the least number of rows. An
index is used for columns that you compare with the following operators:
=, >, >=, <, <=, BETWEEN, and a
LIKE with a non-wildcard prefix like 'something%'.
Any index that doesn't span all AND levels in the WHERE clause
is not used to optimise the query. In other words: To be able to use an
index, a prefix of the index must be used in every AND group.
The following WHERE clauses use indexes:
... WHERE index_part1=1 AND index_part2=2 AND other_column=3
... WHERE index=1 OR A=10 AND index=2 /* index = 1 OR index = 2 */
... WHERE index_part1='hello' AND index_part_3=5
/* optimised like "index_part1='hello'" */
... WHERE index1=1 and index2=2 or index1=3 and index3=3;
    /* Can use index on index1 but not on index2 or index3 */
These WHERE clauses do NOT use indexes:
... WHERE index_part2=1 AND index_part3=2  /* index_part1 is not used */
... WHERE index=1 OR A=10 /* Index is not used in
both AND parts */
... WHERE index_part1=1 OR index_part2=10 /* No index spans all rows */
Note that in some cases MySQL will not use an index, even if one is available. One case where this happens is when using the index would require MySQL to access a large fraction of the rows in the table; a table scan is then probably much faster because it requires fewer disk seeks. However, if such a query uses LIMIT to retrieve only
part of the rows, MySQL will use an index anyway, as it can
much more quickly find the few rows to return in the result.
All MySQL column types can be indexed. Use of indexes on the
relevant columns is the best way to improve the performance of SELECT
operations.
The maximum number of keys and the maximum index length are defined per storage engine. See section 7 MySQL Table Types. With all storage engines, you can have at least 16 keys and a total index length of at least 256 bytes.
For CHAR and VARCHAR columns, you can index a prefix of a
column. This is much faster and requires less disk space than indexing the
whole column. The syntax to use in the CREATE TABLE statement to
index a column prefix looks like this:
KEY index_name (col_name(length))
The example here creates an index for the first 10 characters of the
name column:
mysql> CREATE TABLE test (
-> name CHAR(200) NOT NULL,
-> KEY index_name (name(10)));
For BLOB and TEXT columns, you must index a prefix of the
column. You cannot index the entire column.
In MySQL Version 3.23.23 or later, you can also create special
FULLTEXT indexes. They are used for full-text search. Only the
MyISAM table type supports FULLTEXT indexes. They can be
created only from CHAR, VARCHAR, and TEXT columns.
Indexing always happens over the entire column and partial indexing is not
supported. See section 6.8 MySQL Full-text Search for details.
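As a minimal sketch (the table and column names are hypothetical), a FULLTEXT index is created and searched like this:
mysql> CREATE TABLE articles (
    ->   body TEXT,
    ->   FULLTEXT (body));
mysql> SELECT * FROM articles WHERE MATCH (body) AGAINST ('database');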
MySQL can create indexes on multiple columns. An index may
consist of up to 15 columns. (On CHAR and VARCHAR columns you
can also use a prefix of the column as a part of an index.)
A multiple-column index can be considered a sorted array containing values that are created by concatenating the values of the indexed columns.
MySQL uses multiple-column indexes in such a way that queries are
fast when you specify a known quantity for the first column of the index in a
WHERE clause, even if you don't specify values for the other columns.
Suppose a table is created using the following specification:
mysql> CREATE TABLE test (
-> id INT NOT NULL,
-> last_name CHAR(30) NOT NULL,
-> first_name CHAR(30) NOT NULL,
-> PRIMARY KEY (id),
-> INDEX name (last_name,first_name));
Then the index name is an index over last_name and
first_name. The index will be used for queries that specify
values in a known range for last_name, or for both last_name
and first_name.
Therefore, the name index will be used in the following queries:
mysql> SELECT * FROM test WHERE last_name="Widenius";
mysql> SELECT * FROM test WHERE last_name="Widenius"
-> AND first_name="Michael";
mysql> SELECT * FROM test WHERE last_name="Widenius"
-> AND (first_name="Michael" OR first_name="Monty");
mysql> SELECT * FROM test WHERE last_name="Widenius"
-> AND first_name >="M" AND first_name < "N";
However, the name index will NOT be used in the following queries:
mysql> SELECT * FROM test WHERE first_name="Michael";
mysql> SELECT * FROM test WHERE last_name="Widenius"
-> OR first_name="Michael";
For more information on the manner in which MySQL uses indexes to improve query performance, see section 5.4.3 How MySQL Uses Indexes.
When you run mysqladmin status, you'll see something like this:
Uptime: 426 Running threads: 1 Questions: 11082 Reloads: 1 Open tables: 12
This can be somewhat perplexing if you only have 6 tables.
MySQL is multi-threaded, so it may have many queries on the same table
simultaneously. To minimise the problem with two threads having
different states on the same file, the table is opened independently by
each concurrent thread. This takes some memory but will normally increase
performance. With ISAM and MyISAM tables this also requires
one extra file descriptor for the datafile. With these table types the index
file descriptor is shared between all threads.
You can read more about this topic in the next section. See section 5.4.7 How MySQL Opens and Closes Tables.
table_cache, max_connections, and max_tmp_tables
affect the maximum number of files the server keeps open. If you
increase one or more of these values, you may run up against a limit
imposed by your operating system on the per-process number of open file
descriptors. However, you can increase the limit on many systems.
Consult your OS documentation to find out how to do this, because the
method for changing the limit varies widely from system to system.
table_cache is related to max_connections. For example,
for 200 concurrent running connections, you should have a table cache of
at least 200 * n, where n is the maximum number of tables
in a join. You also need to reserve some extra file descriptors for
temporary tables and files.
Make sure that your operating system can handle the number of open file
descriptors implied by the table_cache setting. If
table_cache is set too high, MySQL may run out of file
descriptors and refuse connections, fail to perform queries, and be very
unreliable. You also have to take into account that the MyISAM storage
engine needs two file descriptors for each unique open table. You can
increase the number of file descriptors available to MySQL with
the --open-files-limit=# startup option. See section A.2.16 File Not Found.
The cache of open tables is kept at a level of table_cache
entries (default 64; this can be changed with the -O
table_cache=# option to mysqld). Note that MySQL may
temporarily open even more tables to be able to execute queries.
A table that is not in use is closed and removed from the table cache under the following circumstances:
When the cache is full and a thread tries to open a table that is not in the cache.
When the cache contains more than table_cache entries and
a thread is no longer using a table.
When someone executes mysqladmin refresh or
mysqladmin flush-tables.
When someone executes a FLUSH TABLES statement.
When the table cache fills up, the server uses the following procedure to locate a cache entry to use:
A table is opened for each concurrent access. This means that
if you have two threads accessing the same table or access the table
twice in the same query (with AS) the table needs to be opened twice.
The first open of any table takes two file descriptors; each additional
use of the table takes only one file descriptor. The extra descriptor
for the first open is used for the index file; this descriptor is shared
among all threads.
If you are opening a table with the HANDLER table_name OPEN
statement, a dedicated table object is allocated for the thread.
This table object is not shared by other threads and will not be closed
until the thread calls HANDLER table_name CLOSE or the thread dies.
See section 6.4.2 HANDLER Syntax. When this happens, the table is put
back in the table cache (if it isn't full).
You can check if your table cache is too small by checking the mysqld
variable Opened_tables. If this is quite big, even if you
haven't done a lot of FLUSH TABLES, you should increase your table
cache. See section 4.5.7.3 SHOW STATUS.
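For example, you can compare the counter against the configured cache size like this (the output will of course differ on your server):
mysql> SHOW STATUS;
mysql> SHOW VARIABLES like 'table_cache';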
If you have many files in a directory, open, close, and create operations will
be slow. If you execute SELECT statements on many different tables,
there will be a little overhead when the table cache is full, because for
every table that has to be opened, another must be closed. You can reduce
this overhead by making the table cache larger.
We start with system-level factors, since some of these decisions must be made very early. In other cases, a quick look at this part may suffice, because it is not that important for the big gains. However, it is always nice to have a feeling for how much could be gained by changing things at this level.
The operating system you use is really important! To get the best use of multiple-CPU machines, you should use Solaris (because its thread implementation works really well) or Linux (because the 2.2 kernel has really good SMP support). Also, on 32-bit machines Linux has a 2G file-size limit by default. Hopefully this will be fixed soon when new filesystems are released (XFS/Reiserfs). If you have a desperate need for files bigger than 2G on 32-bit Linux-Intel, you should get the LFS patch for the ext2 filesystem.
Because we have not run MySQL in production on that many platforms, we advise you to test your intended platform before choosing it, if possible.
Use the --skip-external-locking MySQL option to avoid external
locking. Note that this will not impact MySQL's functionality as
long as you only run one server. Just remember to take down the server (or
lock and flush the relevant tables) before you run myisamchk. On some systems
this switch is mandatory because the external locking does not work in any
case.
The --skip-external-locking option is on by default when compiling with
MIT-pthreads, because flock() isn't fully supported by
MIT-pthreads on all platforms. It's also on by default for Linux
because Linux file locking is not yet safe.
The only case when you can't use --skip-external-locking is if you run
multiple MySQL servers (not clients) on the same data,
or if you run myisamchk on a table without first flushing and locking
the tables in the mysqld server.
You can still use LOCK TABLES/UNLOCK TABLES even if you
are using --skip-external-locking.
You can get the default buffer sizes used by the mysqld server
with this command:
shell> mysqld --help
This command produces a list of all mysqld options and configurable
variables. The output includes the default values and looks something
like this:
Possible variables for option --set-variable (-O) are:
back_log                  current value: 5
bdb_cache_size            current value: 1048540
binlog_cache_size         current value: 32768
connect_timeout           current value: 5
delayed_insert_timeout    current value: 300
delayed_insert_limit      current value: 100
delayed_queue_size        current value: 1000
flush_time                current value: 0
interactive_timeout       current value: 28800
join_buffer_size          current value: 131072
key_buffer_size           current value: 1048540
lower_case_table_names    current value: 0
long_query_time           current value: 10
max_allowed_packet        current value: 1048576
max_binlog_cache_size     current value: 4294967295
max_connections           current value: 100
max_connect_errors        current value: 10
max_delayed_threads       current value: 20
max_heap_table_size       current value: 16777216
max_join_size             current value: 4294967295
max_sort_length           current value: 1024
max_tmp_tables            current value: 32
max_write_lock_count      current value: 4294967295
myisam_sort_buffer_size   current value: 8388608
net_buffer_length         current value: 16384
net_retry_count           current value: 10
net_read_timeout          current value: 30
net_write_timeout         current value: 60
read_buffer_size          current value: 131072
record_rnd_buffer_size    current value: 131072
slow_launch_time          current value: 2
sort_buffer               current value: 2097116
table_cache               current value: 64
thread_concurrency        current value: 10
tmp_table_size            current value: 1048576
thread_stack              current value: 131072
wait_timeout              current value: 28800
Please note that --set-variable is deprecated since MySQL 4.0;
just use --var=option on its own.
If there is a mysqld server currently running, you can see what
values it actually is using for the variables by executing this command:
shell> mysqladmin variables
You can find a full description for all variables in the SHOW VARIABLES
section in this manual. See section 4.5.7.4 SHOW VARIABLES.
You can also see some statistics from a running server by issuing the command
SHOW STATUS. See section 4.5.7.3 SHOW STATUS.
MySQL uses algorithms that are very scalable, so it can usually run with very little memory. However, if you give MySQL more memory, you will normally also get better performance.
When tuning a MySQL server, the two most important variables to use
are key_buffer_size and table_cache. You should first feel
confident that you have these right before trying to change any of the
other variables.
If you have much memory (>=256M) and many tables and want maximum performance with a moderate number of clients, you should use something like this:
shell> safe_mysqld -O key_buffer=64M -O table_cache=256 \
-O sort_buffer=4M -O read_buffer_size=1M &
If you have only 128M and only a few tables, but you still do a lot of sorting, you can use something like:
shell> safe_mysqld -O key_buffer=16M -O sort_buffer=1M
If you have little memory and lots of connections, use something like this:
shell> safe_mysqld -O key_buffer=512k -O sort_buffer=100k \
-O read_buffer_size=100k &
or even:
shell> safe_mysqld -O key_buffer=512k -O sort_buffer=16k \
-O table_cache=32 -O read_buffer_size=8k -O net_buffer_length=1K &
If you are doing a GROUP BY or ORDER BY on files that are
much bigger than your available memory you should increase the value of
record_rnd_buffer to speed up the reading of rows after the sorting
is done.
When you have installed MySQL, the `support-files' directory contains several example `my.cnf' files: `my-huge.cnf', `my-large.cnf', `my-medium.cnf', and `my-small.cnf'. You can use these as a base to optimise your system.
If there are very many connections, ``swapping problems'' may occur unless
mysqld has been configured to use very little memory for each
connection. mysqld performs better if you have enough memory for all
connections, of course.
Note that if you change an option to mysqld, it remains in effect only
for that instance of the server.
To see the effects of a parameter change, do something like this:
shell> mysqld -O key_buffer=32m --help
Make sure that the --help option is last; otherwise, the effect of any
options listed after it on the command-line will not be reflected in the
output.
Most of the following tests are done on Linux with the MySQL benchmarks, but they should give some indication for other operating systems and workloads.
You get the fastest executable when you link with -static.
On Linux, you will get the fastest code when compiling with pgcc
and -O3. To compile `sql_yacc.cc' with these options, you
need about 200M memory because gcc/pgcc needs a lot of memory to
make all functions inline. You should also set CXX=gcc when
configuring MySQL to avoid inclusion of the libstdc++
library (it is not needed). Note that with some versions of pgcc,
the resulting code will run only on true Pentium processors, even if you
use the compiler option saying that you want the resulting code to work on
all x586-type processors (such as AMD).
By just using a better compiler and/or better compiler options you can get a 10-30% speed increase in your application. This is particularly important if you compile the SQL server yourself!
We have tested both the Cygnus CodeFusion and Fujitsu compilers, but when we tested them, neither was sufficiently bug free to allow MySQL to be compiled with optimisations on.
When you compile MySQL you should only include support for the
character sets that you are going to use. (Option --with-charset=xxx.)
The standard MySQL binary distributions are compiled with support
for all character sets.
Here is a list of some measurements that we have done:
If you use pgcc and compile everything with -O6, the
mysqld server is 1% faster than with gcc 2.95.2.
If you link dynamically (without -static), the result is 13%
slower on Linux. Note that you can still use a dynamically linked
MySQL client library. It is only the server that is critical for
performance.
If you strip the mysqld binary with strip libexec/mysqld,
the resulting binary can be up to 4% faster.
If you connect using TCP/IP rather than Unix socket files, the result is
slower on the same computer. (If you are connecting to localhost,
MySQL will, by default, use sockets.)
If you configure MySQL with --with-debug=full, then you will lose 20%
for most queries, but some queries may take substantially longer (the
MySQL benchmarks ran 35% slower).
If you use --with-debug (without =full), then you will only lose 15%.
By starting a mysqld version compiled with --with-debug=full
with --skip-safemalloc the end result should be close to when
configuring with --with-debug.
gcc 3.2
Using gcc 2.95.2 for UltraSPARC with the options
-mcpu=v8 -Wa,-xarch=v8plusa gives 4% more performance.
Using --log-bin makes mysqld 1% slower.
Compiling with -fomit-frame-pointer or -fomit-frame-pointer -ffixed-ebp
makes mysqld 1-4% faster.
The MySQL-Linux distribution provided by MySQL AB used
to be compiled with pgcc, but we had to go back to regular gcc
because of a bug in pgcc that would generate code that does
not run on AMD processors. We will continue using gcc until that bug is resolved.
In the meantime, if you have a non-AMD machine, you can get a faster
binary by compiling with pgcc. The standard MySQL
Linux binary is linked statically to get it faster and more portable.
The following list indicates some of the ways that the mysqld server
uses memory. Where applicable, the name of the server variable relevant
to the memory use is given:
The key buffer (variable key_buffer_size) is shared by all
threads; other buffers used by the server are allocated as
needed. See section 5.5.2 Tuning Server Parameters.
Each connection uses some thread-specific space: a stack (default 64K,
variable thread_stack), a connection buffer (variable
net_buffer_length), and a result buffer (variable
net_buffer_length). The connection buffer and result buffer are
dynamically enlarged up to max_allowed_packet when needed. When
a query is running, a copy of the current query string is also allocated.
Only compressed ISAM / MyISAM tables are memory mapped. This
is because the 32-bit memory space of 4 GB is not large enough for most
big tables. When systems with a 64-bit address space become more
common we may add general support for memory mapping.
Each request doing a sequential scan over a table allocates a read buffer
(variable record_buffer).
When reading rows in 'random' order (for example, after a sort), a
random-read buffer is allocated to avoid disk seeks (variable
record_rnd_buffer).
All joins are done in one pass, and most joins can be done without even
using a temporary table. Most temporary tables are memory-based (HEAP)
tables. Temporary tables with a big record length (calculated as the
sum of all column lengths) or that contain BLOB columns are
stored on disk.
One problem in MySQL versions before Version 3.23.2 is that if a HEAP
table exceeds the size of tmp_table_size, you get the error The
table tbl_name is full. In newer versions this is handled by
automatically changing the in-memory (HEAP) table to a disk-based
(MyISAM) table as necessary. To work around this problem, you can
increase the temporary table size by setting the tmp_table_size
option to mysqld, or by setting the SQL option
BIG_TABLES in the client program. See section 5.5.6 SET Syntax. In MySQL Version 3.20, the maximum size of the
temporary table was record_buffer*16, so if you are using this
version, you have to increase the value of record_buffer. You can
also start mysqld with the --big-tables option to always
store temporary tables on disk. However, this will affect the speed of
many complicated queries.
Most parsing and calculating is done in a local memory store. No memory
overhead is needed for small items, so the normal slow memory allocation
and freeing is avoided. Memory is allocated only for unexpectedly large
strings (this is done with malloc() and
free()).
For each concurrently used table, a table structure, column structures
for each column, and a buffer of size 3 * n are
allocated (where n is the maximum row length, not counting BLOB
columns). A BLOB uses 5 to 8 bytes plus the length of the BLOB
data. The ISAM/MyISAM storage engines use one extra row
buffer for internal usage.
For each table having BLOB columns, a buffer is enlarged dynamically
to read in larger BLOB values. If you scan a table, a buffer as large
as the largest BLOB value is allocated.
The mysqladmin flush-tables command closes all tables that are not in
use and marks all in-use tables to be closed when the currently executing
thread finishes. This will effectively free most in-use memory.
ps and other system status programs may report that mysqld
uses a lot of memory. This may be caused by thread-stacks on different
memory addresses. For example, the Solaris version of ps counts
the unused memory between stacks as used memory. You can verify this by
checking available swap with swap -s. We have tested
mysqld with commercial memory-leakage detectors, so there should
be no memory leaks.
When a new client connects to mysqld, mysqld spawns a
new thread to handle the request. This thread first checks whether the
hostname is in the hostname cache. If not, the thread calls
gethostbyaddr_r() and gethostbyname_r() to resolve the
hostname.
If the operating system doesn't support the above thread-safe calls, the
thread will lock a mutex and call gethostbyaddr() and
gethostbyname() instead. Note that in this case no other thread
can resolve hostnames that are not in the hostname cache until the
first thread is finished.
You can disable DNS host lookups by starting mysqld with
--skip-name-resolve. In this case, however, you can only use IP
numbers in the MySQL privilege tables.
If you have a very slow DNS and many hosts, you can get more performance by
either disabling DNS lookups with --skip-name-resolve or by
increasing the HOST_CACHE_SIZE define (default: 128) and recompiling
mysqld.
You can disable the hostname cache with --skip-host-cache. You
can clear the hostname cache with FLUSH HOSTS or mysqladmin
flush-hosts.
If you don't want to allow connections over TCP/IP, you can do this
by starting mysqld with --skip-networking.
SET Syntax
SET [GLOBAL | SESSION] sql_variable=expression, [[GLOBAL | SESSION] sql_variable=expression...]
SET sets various options that affect the operation of the
server or your client.
The following examples show the different syntaxes one can use to set variables:
In old MySQL versions we allowed the use of the SET OPTION syntax,
but this syntax is now deprecated.
In MySQL 4.0.3 we added the GLOBAL and SESSION options
and access to most important startup variables.
LOCAL can be used as a synonym for SESSION.
If you set several variables on the same command line, the last used
GLOBAL | SESSION mode is used.
SET sort_buffer_size=10000;
SET @@local.sort_buffer_size=10000;
SET GLOBAL sort_buffer_size=1000000, SESSION sort_buffer_size=1000000;
SET @@sort_buffer_size=1000000;
SET @@global.sort_buffer_size=1000000, @@local.sort_buffer_size=1000000;
The @@variable_name syntax is supported to make MySQL syntax
compatible with some other databases.
The different system variables one can set are described in the system variable section of this manual. See section 6.1.5 System Variables.
If you are using SESSION (the default), the option you set remains
in effect until the current session ends or until you set the option to
a different value. If you use GLOBAL, which requires the
SUPER privilege, the option is remembered and used for new
connections until the server restarts. If you want to make an option
permanent, you should set it in one of the MySQL option
files. See section 4.1.2 `my.cnf' Option Files.
To prevent incorrect usage, MySQL issues an error if you use SET
GLOBAL with a variable that can only be used with SET SESSION, or if
you do not use SET GLOBAL when setting a global variable.
If you want to set a SESSION variable to the GLOBAL value or a
GLOBAL value to the MySQL default value, you can set it to
DEFAULT.
SET max_join_size=DEFAULT;
This is identical to:
SET @@session.max_join_size=@@global.max_join_size;
If you want to restrict the maximum value a startup option can be set to
with the SET command, you can specify this by using the
--maximum-variable-name command line option. See section 4.1.1 mysqld Command-line Options.
You can get a list of most variables with SHOW VARIABLES.
See section 4.5.7.4 SHOW VARIABLES. You can get the value of a specific variable with
the @@[global.|local.]variable_name syntax:
SHOW VARIABLES like "max_join_size";
SHOW GLOBAL VARIABLES like "max_join_size";
SELECT @@max_join_size, @@global.max_join_size;
Here follows a description of the variables that use a non-standard
SET syntax, and of some of the other
variables. The other variable definitions can be found in the system
variable section, among the startup options, or in the description of
SHOW VARIABLES. See section 6.1.5 System Variables. See section 4.1.1 mysqld Command-line Options. See section 4.5.7.4 SHOW VARIABLES.
CHARACTER SET character_set_name | DEFAULT
This maps all strings from and to the client with the given mapping.
Currently the only option for character_set_name is
cp1251_koi8, but you can easily add new mappings by editing the
`sql/convert.cc' file in the MySQL source distribution. The
default mapping can be restored by using a character_set_name value of
DEFAULT.
Note that the syntax for setting the CHARACTER SET option differs
from the syntax for setting the other options.
PASSWORD = PASSWORD('some password')
Set the password for the current user. Any non-anonymous user can change his own password.
PASSWORD FOR user = PASSWORD('some password')
Set the password for a specific user on the current server host. Only a user
with access to the mysql database can do this. The user should be
given in user@hostname format, where user and hostname
are exactly as they are listed in the User and Host columns of
the mysql.user table entry. For example, if you had an entry with
User and Host fields of 'bob' and '%.loc.gov',
you would write:
mysql> SET PASSWORD FOR bob@"%.loc.gov" = PASSWORD("newpass");
This is equivalent to:
mysql> UPDATE mysql.user SET password=PASSWORD("newpass")
-> WHERE user="bob" AND host="%.loc.gov";
SQL_AUTO_IS_NULL = 0 | 1
If set to 1 (the default), you can find the last inserted row
for a table with an AUTO_INCREMENT column with the following construct:
WHERE auto_increment_column IS NULL. This is used by some
ODBC programs like Access.
AUTOCOMMIT = 0 | 1
If set to 1, all changes to a table take effect immediately. To start
a multi-command transaction, you have to use the BEGIN
statement. See section 6.7.1 BEGIN/COMMIT/ROLLBACK Syntax. If set to 0, you have to use COMMIT /
ROLLBACK to accept or revoke that transaction. See section 6.7.1 BEGIN/COMMIT/ROLLBACK Syntax. Note
that when you change from non-AUTOCOMMIT mode to
AUTOCOMMIT mode, MySQL will perform an automatic
COMMIT on any open transaction.
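A minimal sketch of a multi-statement transaction with AUTOCOMMIT disabled (the accounts table is hypothetical and must use a transaction-safe table type such as InnoDB or BDB):
mysql> SET AUTOCOMMIT=0;
mysql> UPDATE accounts SET balance=balance-100 WHERE id=1;
mysql> UPDATE accounts SET balance=balance+100 WHERE id=2;
mysql> COMMIT;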
BIG_TABLES = 0 | 1
If set to 1, all temporary tables are stored on disk rather than in
memory. This will be a little slower, but you will not get the error
The table tbl_name is full for big SELECT operations that
require a large temporary table. The default value for a new connection is
0 (that is, use in-memory temporary tables).
This option was previously named SQL_BIG_TABLES. In MySQL 4.0 you should
normally never need this flag, as MySQL will automatically convert in-memory
tables to disk-based ones as needed.
SQL_BIG_SELECTS = 0 | 1
If set to 0, MySQL will abort a SELECT that will probably
take a very long time, that is, one for which the number
of examined rows is probably going to be bigger than MAX_JOIN_SIZE.
This is useful when an inadvisable WHERE clause has been
issued. A big query is defined as a SELECT that probably will
have to examine more than max_join_size rows. The default value
for a new connection is 1 (which allows all SELECT
statements).
If you set MAX_JOIN_SIZE to a value other than DEFAULT,
SQL_BIG_SELECTS will be set to 0.
SQL_BUFFER_RESULT = 0 | 1
SQL_BUFFER_RESULT will force the result from SELECTs
to be put into a temporary table. This will help MySQL free the
table locks early and will help in cases where it takes a long time to
send the result set to the client.
LOW_PRIORITY_UPDATES = 0 | 1
If set to 1, all INSERT, UPDATE, DELETE, and
LOCK TABLE WRITE statements wait until there is no pending
SELECT or LOCK TABLE READ on the affected table.
This option was previously named SQL_LOW_PRIORITY_UPDATES.
MAX_JOIN_SIZE = value | DEFAULT
Don't allow SELECTs that will probably need to examine more than
value row combinations or are likely to do more than value
disk seeks. By setting this value, you can catch SELECTs where
keys are not used properly and that would probably take a long
time. Setting this to a value other than DEFAULT will reset the
SQL_BIG_SELECTS flag. If you set the SQL_BIG_SELECTS flag
again, the SQL_MAX_JOIN_SIZE variable will be ignored. You can
set a default value for this variable by starting mysqld with
-O max_join_size=#. This option was previously named
SQL_MAX_JOIN_SIZE.
Note that if the result of the query is already in the query cache, the
above check is not made. Instead, MySQL will send the result to the
client, since the result is already computed and sending it does not
burden the server.
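For example, you might protect an interactive session against runaway joins like this (t1 and t2 are hypothetical tables):
mysql> SET MAX_JOIN_SIZE=1000000;
mysql> SELECT * FROM t1, t2;   # aborted with an error if it would examine more than 1,000,000 row combinations
mysql> SET MAX_JOIN_SIZE=DEFAULT;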
QUERY_CACHE_TYPE = OFF | ON | DEMAND
QUERY_CACHE_TYPE = 0 | 1 | 2
| Option | Description |
| 0 or OFF | Don't cache or retrieve results. |
| 1 or ON | Cache all results except SELECT SQL_NO_CACHE ... queries. |
| 2 or DEMAND | Cache only SELECT SQL_CACHE ... queries. |
SQL_SAFE_UPDATES = 0 | 1
If set to 1, MySQL will abort an UPDATE or
DELETE that does not use a key or a LIMIT in the
WHERE clause. This makes it possible to catch wrong updates
when creating SQL commands by hand.
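A short sketch (the customers table is hypothetical):
mysql> SET SQL_SAFE_UPDATES=1;
mysql> UPDATE customers SET active=0;               # rejected: no key used and no LIMIT
mysql> UPDATE customers SET active=0 WHERE id=42;   # accepted: the WHERE clause uses a key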
SQL_SELECT_LIMIT = value | DEFAULT
The maximum number of rows to return from SELECT statements. If
a SELECT has a LIMIT clause, the LIMIT takes precedence
over the value of SQL_SELECT_LIMIT. The default value for a new
connection is ``unlimited.'' If you have changed the limit, the default value
can be restored by using a SQL_SELECT_LIMIT value of DEFAULT.
SQL_LOG_OFF = 0 | 1
If set to 1, no logging will be done to the standard log for this
client, if the client has the SUPER privilege. This affects
neither the update log nor the binary log!
SQL_LOG_UPDATE = 0 | 1
If set to 0, no logging will be done to the update log for this client,
if the client has the SUPER privilege. This does not affect the
standard log! This variable is deprecated starting from version 5.0.
SQL_QUOTE_SHOW_CREATE = 0 | 1
If set to 1, SHOW CREATE TABLE will quote
table and column names. This is on by default so that
replication of tables with unusual column names works. See
section 4.5.7.8 SHOW CREATE TABLE.
TIMESTAMP = timestamp_value | DEFAULT
Set the time for this client. timestamp_value should be a
Unix epoch timestamp, not a MySQL timestamp.
LAST_INSERT_ID = #
Set the value to be returned from LAST_INSERT_ID(). This is stored in
the binary log when you use LAST_INSERT_ID() in a command that updates
a table.
INSERT_ID = #
Set the value to be used by the following INSERT or ALTER TABLE
command when inserting an AUTO_INCREMENT value. This is mainly used
with the binary log.
On Linux, you can get much more performance by using hdparm to
configure your disk's interface. The following should be quite good
hdparm options for MySQL (and probably many other applications):
hdparm -m 16 -d 1
Note that the performance and reliability when using the above depend on
your hardware, so we strongly suggest that you test your system thoroughly
after using hdparm. Please consult the hdparm
man page for more information. If hdparm is not used wisely,
filesystem corruption may result. Back up everything before experimenting!
On some operating systems you can mount the disks with the -o async
option to set the filesystem to be updated asynchronously. If your computer is
reasonably stable, this should give you more performance without sacrificing
too much reliability. (This flag is on by default on Linux.)
You can also avoid updating the last access time on files, and thereby
avoid some disk activity, by mounting with the -o noatime option.
You can move tables and databases from the database directory to other locations and replace them with symbolic links to the new locations. You might want to do this, for example, to move a database to a file system with more free space, or to increase the speed of your system by spreading your tables over different disks.
The recommended way to do this is to symlink whole databases to a different disk and to symlink individual tables only as a last resort.
The way to symlink a database is to first create a directory on some disk where you have free space and then create a symlink to it from the MySQL database directory.
shell> mkdir /dr1/databases/test
shell> ln -s /dr1/databases/test mysqld-datadir
MySQL doesn't support linking one directory to multiple
databases. Replacing a database directory with a symbolic link will
work fine as long as you don't make a symbolic link between databases.
Suppose you have a database db1 under the MySQL data
directory, and then make a symlink db2 that points to db1:
shell> cd /path/to/datadir
shell> ln -s db1 db2
Now, for any table tbl_a in db1, there also appears to be
a table tbl_a in db2. If one thread updates db1.tbl_a
and another thread updates db2.tbl_a, there will be problems.
If you really need this, you must change the following code in `mysys/mf_format.c':
if (flag & 32 || (!lstat(to,&stat_buff) && S_ISLNK(stat_buff.st_mode)))
to
if (1)
On Windows you can use internal symbolic links to directories by compiling
MySQL with -DUSE_SYMDIR. This allows you to put different
databases on different disks. See section 2.6.2.5 Splitting Data Across Different Disks on Windows.
Before MySQL 4.0 you should not symlink tables unless you are
very careful with them. The problem is that if you run ALTER
TABLE, REPAIR TABLE, or OPTIMIZE TABLE on a symlinked
table, the symlinks will be removed and replaced by the original
files. This happens because these commands work by creating a
temporary file in the database directory and, when the command is
complete, replacing the original file with the temporary file.
You should not symlink tables on systems that don't have a fully
working realpath() call. (At least Linux and Solaris support
realpath().)
In MySQL 4.0 symlinks are fully supported only for MyISAM
tables. For other table types you will probably get strange problems
when doing any of the above mentioned commands.
The handling of symbolic links in MySQL 4.0 works the following
way (this is mostly relevant only for MyISAM tables):
In the data directory you will always have the table definition file and
the data and index files. You can symlink the data file and the index file
to other directories; this can be done manually from the operating system (while mysqld is
not running) or with the INDEX/DATA DIRECTORY="path-to-dir" options
in CREATE TABLE (see the example after this list). See section 6.5.3 CREATE TABLE Syntax.
myisamchk will not replace a symlink with the data or index file but
work directly on the file the symlink points to. Any temporary files
will be created in the same directory where the data or index file is
located.
You should not run mysqld as root or allow
other persons to have write access to the MySQL database directories.
If you use ALTER TABLE RENAME and you don't move
the table to another database, the symlinks in the database directory
are renamed to the new names and the data and index files are
renamed accordingly.
If you use ALTER TABLE RENAME to move a table to another database,
the table is moved to the other database directory and the old
symlinks and the files they pointed to are deleted. (In other words,
the new table will not be symlinked.)
If you are not using symlinks, you should use the --skip-symlink
option to mysqld to ensure that no one can drop or rename a file
outside of the mysqld data directory.
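The CREATE TABLE form mentioned in the first point above might look like the following minimal sketch (the directories are hypothetical and must exist and be writable by the mysqld user; this works only for MyISAM tables):
mysql> CREATE TABLE big_table (id INT NOT NULL)
    ->        DATA DIRECTORY="/dr2/data"
    ->        INDEX DIRECTORY="/dr3/index";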
Things that are not yet supported:
ALTER TABLE ignores all INDEX/DATA DIRECTORY="path" options.
CREATE TABLE doesn't report if the table has symbolic links.
mysqldump doesn't include the symbolic link information in the output.
BACKUP TABLE and RESTORE TABLE don't respect symbolic links.
MySQL has a very complex, but intuitive and easy to learn SQL interface. This chapter describes the various commands, types, and functions you will need to know in order to use MySQL efficiently and effectively. This chapter also serves as a reference to all functionality included in MySQL. In order to use this chapter effectively, you may find it useful to refer to the various indexes.
This section describes the various ways to write strings and numbers in MySQL. It also covers the various nuances and ``gotchas'' that you may run into when dealing with these basic types in MySQL.
A string is a sequence of characters, surrounded by either single quote (`'') or double quote (`"') characters (only the single quote if you run in ANSI mode). Examples:
'a string' "another string"
Within a string, certain sequences have special meaning. Each of these sequences begins with a backslash (`\'), known as the escape character. MySQL recognises the following escape sequences:
\0
An ASCII 0 (NUL) character.
\'
A single quote (`'') character.
\"
A double quote (`"') character.
\b
A backspace character.
\n
A newline (linefeed) character.
\r
A carriage return character.
\t
A tab character.
\z
ASCII 26 (Control-Z). This character can cause problems on Windows,
because ASCII 26 stands for end-of-file there (for example, when doing
mysql database < filename).
\\
A backslash (`\') character.
\%
A `%' character. This is used to search for literal instances of `%'
in contexts where `%' would otherwise be interpreted as a wildcard character.
\_
A `_' character. This is used to search for literal instances of `_'
in contexts where `_' would otherwise be interpreted as a wildcard character.
Note that if you use `\%' or `\_' in some string contexts, these will return the strings `\%' and `\_' and not `%' and `_'.
There are several ways to include quote characters within a string:
A `'' inside a string quoted with `'' may be written as `'''.
A `"' inside a string quoted with `"' may be written as `""'.
You can precede the quote character with a backslash (`\').
A `'' inside a string quoted with `"' needs no special treatment, and
neither does a `"' inside a string quoted with `''.
The SELECT statements shown here demonstrate how quoting and
escaping work:
mysql> SELECT 'hello', '"hello"', '""hello""', 'hel''lo', '\'hello';
+-------+---------+-----------+--------+--------+
| hello | "hello" | ""hello"" | hel'lo | 'hello |
+-------+---------+-----------+--------+--------+
mysql> SELECT "hello", "'hello'", "''hello''", "hel""lo", "\"hello";
+-------+---------+-----------+--------+--------+
| hello | 'hello' | ''hello'' | hel"lo | "hello |
+-------+---------+-----------+--------+--------+
mysql> SELECT "This\nIs\nFour\nlines";
+--------------------+
| This
Is
Four
lines |
+--------------------+
If you want to insert binary data into a string column (such as a
BLOB), the following characters must be represented by escape
sequences:
NUL
ASCII 0. Represent it by `\0' (a backslash followed by an ASCII `0' character).
\
ASCII 92, backslash. Represent it by `\\'.
'
ASCII 39, single quote. Represent it by `\''.
"
ASCII 34, double quote. Represent it by `\"'.
If you write C code, you can use the C API function
mysql_real_escape_string() to escape characters for the INSERT
statement. See section 8.1.2 C API Function Overview. In Perl, you can use the
quote method of the DBI package to convert special
characters to the proper escape sequences. See section 8.5.2 The DBI Interface.
You should use an escape function on any string that might contain any of the special characters listed above!
Alternatively, many MySQL APIs provide some sort of placeholder capability that allows you to insert special markers into a query string and then bind data values to them when you issue the query. In this case, the API takes care of escaping special characters in the values for you automatically.
Integers are represented as a sequence of digits. Floats use `.' as a decimal separator. Either type of number may be preceded by `-' to indicate a negative value.
Examples of valid integers:
1221 0 -32
Examples of valid floating-point numbers:
294.42 -32032.6809e+10 148.00
An integer may be used in a floating-point context; it is interpreted as the equivalent floating-point number.
MySQL supports hexadecimal values. In numeric context these act like an integer (64-bit precision). In string context these act like a binary string where each pair of hex digits is converted to a character:
mysql> SELECT x'4D7953514C';
-> MySQL
mysql> SELECT 0xa+0;
-> 10
mysql> SELECT 0x5061756c;
-> Paul
In MySQL 4.1 (and in MySQL 4.0 when using the --new option) the
default type of a hexadecimal value is a string. If you want to be
sure that the value is treated as a number, you can use
CAST( ... AS UNSIGNED) on the hexadecimal value.
The x'hexstring' syntax (new in 4.0) is based on standard SQL and the
0x syntax is based on ODBC. Hexadecimal strings are often used by
ODBC to supply values for BLOB columns.
You can convert a string or a number to a string in hexadecimal format with
the HEX() function.
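For example:
mysql> SELECT HEX('abc');
        -> 616263
mysql> SELECT HEX(255);
        -> FF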
NULL Values
The NULL value means ``no data'' and is different from values such
as 0 for numeric types or the empty string for string types.
See section A.5.3 Problems with NULL Values.
NULL may be represented by \N when using the text file import
or export formats (LOAD DATA INFILE, SELECT ... INTO OUTFILE).
See section 6.4.9 LOAD DATA INFILE Syntax.
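Because NULL is not equal to anything, not even to itself, you must test for it with IS NULL rather than with the = operator:
mysql> SELECT 1 = NULL;
        -> NULL
mysql> SELECT 1 IS NULL;
        -> 0
mysql> SELECT NULL IS NULL;
        -> 1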
Database, table, index, column, and alias names all follow the same rules in MySQL.
Note that the rules changed starting with MySQL Version 3.23.6 when we introduced quoting of identifiers (database, table, and column names) with ``'. `"' will also work to quote identifiers if you run in ANSI mode. See section 1.8.2 Running MySQL in ANSI Mode.
| Identifier | Max length | Allowed characters |
| Database | 64 | Any character that is allowed in a directory name except `/', `\' or `.'. |
| Table | 64 | Any character that is allowed in a file name, except `/' or `.'. |
| Column | 64 | All characters. |
| Alias | 255 | All characters. |
Note that in addition to the above, you can't have ASCII(0) or ASCII(255) or the quoting character in an identifier.
Note that if the identifier is a restricted word or contains special characters
you must always quote it with a ` (backtick) when you use it:
mysql> SELECT * FROM `select` WHERE `select`.id > 100;
See section 6.1.7 Is MySQL Picky About Reserved Words?.
In MySQL versions prior to 3.23.6, the name rules are as follows:
A name may consist of alphanumeric characters from the current character
set, plus `_' and `$'. The default character set is ISO-8859-1 Latin1;
this may be changed with the --default-character-set option
to mysqld.
See section 4.6.1 The Character Set Used for Data and Sorting.
It is recommended that you do not use names like 1e, because
an expression like 1e+1 is ambiguous. It may be interpreted as the
expression 1e + 1 or as the number 1e+1.
In MySQL you can refer to a column using any of the following forms:
| Column reference | Meaning |
| col_name | Column col_name from whichever table used in the query contains a column of that name. |
| tbl_name.col_name | Column col_name from table tbl_name of the current database. |
| db_name.tbl_name.col_name | Column col_name from table tbl_name of the database db_name. This form is available in MySQL Version 3.22 or later. |
`column_name` | A column that is a keyword or contains special characters. |
You need not specify a tbl_name or db_name.tbl_name prefix for
a column reference in a statement unless the reference would be ambiguous.
For example, suppose tables t1 and t2 each contain a column
c, and you retrieve c in a SELECT statement that uses
both t1 and t2. In this case, c is ambiguous because it
is not unique among the tables used in the statement, so you must indicate
which table you mean by writing t1.c or t2.c. Similarly, if
you are retrieving from a table t in database db1 and from a
table t in database db2, you must refer to columns in those
tables as db1.t.col_name and db2.t.col_name.
The syntax .tbl_name means the table tbl_name in the current
database. This syntax is accepted for ODBC compatibility, because some ODBC
programs prefix table names with a `.' character.
In MySQL, databases and tables correspond to directories and files within those directories. Consequently, the case-sensitivity of the underlying operating system determines the case-sensitivity of database and table names. This means database and table names are case-insensitive on Windows and case-sensitive on most varieties of Unix. One prominent exception is Mac OS X when the default HFS+ file system is being used. However, Mac OS X also supports UFS volumes, and those are case-sensitive on Mac OS X just as they are on any Unix. See section 1.8.3 MySQL Extensions To The SQL-92 Standard.
Note: although database and table names are case-insensitive for
Windows, you should not refer to a given database or table using different
cases within the same query. The following query would not work because it
refers to a table both as my_table and as MY_TABLE:
mysql> SELECT * FROM my_table WHERE MY_TABLE.col=1;
Column names and column aliases are case-insensitive in all cases.
Aliases on tables are case-sensitive. The following query would not work
because it refers to the alias both as a and as A:
mysql> SELECT col_name FROM tbl_name AS a
-> WHERE a.col_name = 1 OR A.col_name = 2;
If you have trouble remembering the lettercase for database and table names, adopt a consistent convention, such as always creating databases and tables using lowercase names.
One way to avoid this problem is to start mysqld with -O
lower_case_table_names=1. By default this option is 1 on Windows and 0 on
Unix.
If lower_case_table_names is 1 MySQL will convert all
table names to lowercase on storage and lookup.
(From version 4.0.2, this option also applies to database names. From
4.1.1 it also applies to table aliases.)
Note that if you change this option, you need to first convert your old
table names to lower case before starting mysqld.
If you move MyISAM files from a Windows to a Unix disk, you may
in some cases need to use the `mysql_fix_extensions' tool to fix-up
the case of the file extensions in each specified database directory
(lowercase `.frm', uppercase `.MYI' and `.MYD').
`mysql_fix_extensions' can be found in the `scripts' subdirectory.
MySQL supports connection-specific user variables with the
@variablename syntax. A variable name may consist of
alphanumeric characters from the current character set and also
`_', `$', and `.' . The default character set is
ISO-8859-1 Latin1; this may be changed with the
--default-character-set option to mysqld. See section 4.6.1 The Character Set Used for Data and Sorting. User variable names are case insensitive in versions >= 5.0, case
sensitive in versions < 5.0.
Variables don't have to be initialised. They contain NULL by default
and can store an integer, real, or string value. All variables for
a thread are automatically freed when the thread exits.
You can set a variable with the SET syntax:
SET @variable= { integer expression | real expression | string expression }
[,@variable= ...].
You can also assign a value to a variable in statements other than SET.
However, in this case the assignment operator is := rather than
=, because = is reserved for comparisons in non-SET
statements:
mysql> SELECT @t1:=(@t2:=1)+@t3:=4,@t1,@t2,@t3;
+----------------------+------+------+------+
| @t1:=(@t2:=1)+@t3:=4 | @t1  | @t2  | @t3  |
+----------------------+------+------+------+
|                    5 |    5 |    1 |    4 |
+----------------------+------+------+------+
User variables may be used where expressions are allowed. Note that
this does not currently include contexts where a number is explicitly
required, such as in the LIMIT clause of a SELECT statement,
or the IGNORE number LINES clause of a LOAD DATA statement.
Note: in a SELECT statement, each expression is evaluated
only when it's sent to the client. This means that in the HAVING,
GROUP BY, or ORDER BY clause, you can't refer to an expression
that involves variables that are set in the SELECT part. For example,
the following statement will NOT work as expected:
mysql> SELECT (@aa:=id) AS a, (@aa+3) AS b FROM table_name HAVING b=5;
The reason is that @aa will not contain the value of the current
row, but the value of id for the previous accepted row.
The rule is to never assign and use the same variable in the same statement.
Starting from MySQL 4.0.3 we provide better access to a lot of system and connection variables. One can change most of them without having to take down the server.
There are two kind of system variables: Thread-specific (or connection-specific) variables that are unique to the current connection and global variables that are used to configure global events. Global variables also are used to set up the initial values of the corresponding thread-specific variables for new connections.
When mysqld starts, all global variables are initialised from command
line arguments and option files. You can change the value with the
SET GLOBAL command. When a new thread is created, the thread-specific
variables are initialised from the global variables and they
will not change even if you issue a new SET GLOBAL command.
To set the value for a GLOBAL variable, you should use one
of the following syntaxes:
(Here we use sort_buffer_size as an example variable)
SET GLOBAL sort_buffer_size=value; SET @@global.sort_buffer_size=value;
To set the value for a SESSION variable, you can use one of the
following syntaxes:
SET SESSION sort_buffer_size=value; SET @@session.sort_buffer_size=value; SET sort_buffer_size=value;
If you don't specify GLOBAL or SESSION then SESSION
is used. See section 5.5.6 SET Syntax.
LOCAL is a synonym for SESSION.
To retrieve the value for a GLOBAL variable you can use one of the
following commands:
SELECT @@global.sort_buffer_size; SHOW GLOBAL VARIABLES like 'sort_buffer_size';
To retrieve the value for a SESSION variable you can use one of the
following commands:
SELECT @@session.sort_buffer_size; SHOW SESSION VARIABLES like 'sort_buffer_size';
When you retrieve a variable value with the
@@variable_name syntax and you don't specify GLOBAL or
SESSION then MySQL will return the thread-specific
(SESSION) value if it exists. If not, MySQL will return the
global value.
The reason for requiring GLOBAL when setting GLOBAL-only
variables, but not when retrieving them, is to ensure that we don't
run into problems later if we introduce a thread-specific variable
with the same name or remove a thread-specific variable. In that case,
you could accidentally change the state of the server as a whole, rather than
just your own connection.
The following is a full list of all variables that you can change and retrieve,
and whether you can use GLOBAL or SESSION with them.
| Variable name | Value type | Type |
| autocommit | bool | SESSION |
| big_tables | bool | SESSION |
| binlog_cache_size | num | GLOBAL |
| bulk_insert_buffer_size | num | GLOBAL | SESSION |
| concurrent_insert | bool | GLOBAL |
| connect_timeout | num | GLOBAL |
| convert_character_set | string | SESSION |
| delay_key_write | OFF | ON | ALL | GLOBAL |
| delayed_insert_limit | num | GLOBAL |
| delayed_insert_timeout | num | GLOBAL |
| delayed_queue_size | num | GLOBAL |
| error_count | num | LOCAL |
| flush | bool | GLOBAL |
| flush_time | num | GLOBAL |
| foreign_key_checks | bool | SESSION |
| identity | num | SESSION |
| insert_id | bool | SESSION |
| interactive_timeout | num | GLOBAL | SESSION |
| join_buffer_size | num | GLOBAL | SESSION |
| key_buffer_size | num | GLOBAL |
| last_insert_id | bool | SESSION |
| local_infile | bool | GLOBAL |
| log_warnings | bool | GLOBAL |
| long_query_time | num | GLOBAL | SESSION |
| low_priority_updates | bool | GLOBAL | SESSION |
| max_allowed_packet | num | GLOBAL | SESSION |
| max_binlog_cache_size | num | GLOBAL |
| max_binlog_size | num | GLOBAL |
| max_connect_errors | num | GLOBAL |
| max_connections | num | GLOBAL |
| max_error_count | num | GLOBAL | SESSION |
| max_delayed_threads | num | GLOBAL |
| max_heap_table_size | num | GLOBAL | SESSION |
| max_join_size | num | GLOBAL | SESSION |
| max_sort_length | num | GLOBAL | SESSION |
| max_tmp_tables | num | GLOBAL |
| max_user_connections | num | GLOBAL |
| max_write_lock_count | num | GLOBAL |
| myisam_max_extra_sort_file_size | num | GLOBAL | SESSION |
| myisam_repair_threads | num | GLOBAL | SESSION |
| myisam_max_sort_file_size | num | GLOBAL | SESSION |
| myisam_sort_buffer_size | num | GLOBAL | SESSION |
| net_buffer_length | num | GLOBAL | SESSION |
| net_read_timeout | num | GLOBAL | SESSION |
| net_retry_count | num | GLOBAL | SESSION |
| net_write_timeout | num | GLOBAL | SESSION |
| query_cache_limit | num | GLOBAL |
| query_cache_size | num | GLOBAL |
| query_cache_type | enum | GLOBAL |
| read_buffer_size | num | GLOBAL | SESSION |
| read_rnd_buffer_size | num | GLOBAL | SESSION |
| rpl_recovery_rank | num | GLOBAL |
| safe_show_database | bool | GLOBAL |
| server_id | num | GLOBAL |
| slave_compressed_protocol | bool | GLOBAL |
| slave_net_timeout | num | GLOBAL |
| slow_launch_time | num | GLOBAL |
| sort_buffer_size | num | GLOBAL | SESSION |
| sql_auto_is_null | bool | SESSION |
| sql_big_selects | bool | SESSION |
| sql_big_tables | bool | SESSION |
| sql_buffer_result | bool | SESSION |
| sql_log_binlog | bool | SESSION |
| sql_log_off | bool | SESSION |
| sql_log_update | bool | SESSION |
| sql_low_priority_updates | bool | GLOBAL | SESSION |
| sql_max_join_size | num | GLOBAL | SESSION |
| sql_quote_show_create | bool | SESSION |
| sql_safe_updates | bool | SESSION |
| sql_select_limit | bool | SESSION |
| sql_slave_skip_counter | num | GLOBAL |
| sql_warnings | bool | SESSION |
| table_cache | num | GLOBAL |
| table_type | enum | GLOBAL | SESSION |
| thread_cache_size | num | GLOBAL |
| timestamp | bool | SESSION |
| tmp_table_size | enum | GLOBAL | SESSION |
| tx_isolation | enum | GLOBAL | SESSION |
| version | string | GLOBAL |
| wait_timeout | num | GLOBAL | SESSION |
| warning_count | num | LOCAL |
| unique_checks | bool | SESSION |
Variables that are marked with num can be given a numerical
value. Variables that are marked with bool can be set to 0, 1,
ON or OFF. Variables that are of type enum should
normally be set to one of the available values for the variable, but can
also be set to the number that corresponds to the enum value. (The first
enum value is 0.)
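For example, the query_cache_type variable is of type enum and can be set either by name or by the corresponding number (this requires the SUPER privilege because it is a GLOBAL variable):
mysql> SET GLOBAL query_cache_type=DEMAND;
mysql> SET GLOBAL query_cache_type=2;   # same effect, using the numeric enum value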
Here is a description of some of the variables:
| Variable | Description |
| identity | Alias for last_insert_id (Sybase compatibility) |
| sql_low_priority_updates | Alias for low_priority_updates |
| sql_max_join_size | Alias for max_join_size |
| delay_key_write_for_all_tables | If this and delay_key_write are set, then all new MyISAM tables that are opened will use delayed key writes. |
| version | Alias for VERSION() (Sybase (?) compatibility) |
A description of the other variable definitions can be found in the
startup options section, the description of SHOW VARIABLES and in
the SET section. See section 4.1.1 mysqld Command-line Options. See section 4.5.7.4 SHOW VARIABLES. See section 5.5.6 SET Syntax.
The MySQL server supports the # to end of line, --
to end of line and /* in-line or multiple-line */ comment
styles:
mysql> SELECT 1+1;     # This comment continues to the end of line
mysql> SELECT 1+1;     -- This comment continues to the end of line
mysql> SELECT 1 /* this is an in-line comment */ + 1;
mysql> SELECT 1+
    /*
    this is a
    multiple-line comment
    */
    1;
Note that the -- (double-dash) comment style requires you to have at
least one space after the second dash!
Although the server understands the comment syntax just described,
there are some limitations on the way that the mysql client
parses /* ... */ comments:
If you run mysql interactively, you can tell that it
has gotten confused like this because the prompt changes from mysql>
to '> or ">.
These limitations apply both when you run mysql interactively
and when you put commands in a file and tell mysql to read its
input from that file with mysql < some-file.
MySQL supports the `--' SQL-99 comment style only if the second dash is followed by a space. See section 1.8.4.7 `--' as the Start of a Comment.
A common problem stems from trying to create a table with column names that
use the names of datatypes or functions built into MySQL, such as
TIMESTAMP or GROUP. You're allowed to do it (for example,
ABS is an allowed column name), but whitespace is not allowed between
a function name and the immediately following `(' when using functions
whose names are also column names.
The following words are explicitly reserved in MySQL. Most of
them are forbidden by SQL-92 as column and/or table names
(for example, GROUP).
A few are reserved because MySQL needs them and is
(currently) using a yacc parser:
| Word | Word | Word |
| ADD | ALL | ALTER |
| ANALYZE | AND | AS |
| ASC | ASENSITIVE | AUTO_INCREMENT |
| BDB | BEFORE | BERKELEYDB |
| BETWEEN | BIGINT | BINARY |
| BLOB | BOTH | BTREE |
| BY | CALL | CASCADE |
| CASE | CHANGE | CHAR |
| CHARACTER | CHECK | COLLATE |
| COLUMN | COLUMNS | CONNECTION |
| CONSTRAINT | CREATE | CROSS |
| CURRENT_DATE | CURRENT_TIME | CURRENT_TIMESTAMP |
| CURSOR | DATABASE | DATABASES |
| DAY_HOUR | DAY_MINUTE | DAY_SECOND |
| DEC | DECIMAL | DECLARE |
| DEFAULT | DELAYED | DELETE |
| DESC | DESCRIBE | DISTINCT |
| DISTINCTROW | DIV | DOUBLE |
| DROP | ELSE | ELSEIF |
| ENCLOSED | ERRORS | ESCAPED |
| EXISTS | EXPLAIN | FALSE |
| FIELDS | FLOAT | FOR |
| FORCE | FOREIGN | FROM |
| FULLTEXT | GRANT | GROUP |
| HASH | HAVING | HIGH_PRIORITY |
| HOUR_MINUTE | HOUR_SECOND | IF |
| IGNORE | IN | INDEX |
| INFILE | INNER | INNODB |
| INOUT | INSENSITIVE | INSERT |
| INT | INTEGER | INTERVAL |
| INTO | IS | ITERATE |
| JOIN | KEY | KEYS |
| KILL | LEADING | LEAVE |
| LEFT | LIKE | LIMIT |
| LINES | LOAD | LOCALTIME |
| LOCALTIMESTAMP | LOCK | LONG |
| LONGBLOB | LONGTEXT | LOOP |
| LOW_PRIORITY | MASTER_SERVER_ID | MATCH |
| MEDIUMBLOB | MEDIUMINT | MEDIUMTEXT |
| MIDDLEINT | MINUTE_SECOND | MOD |
| MRG_MYISAM | NATURAL | NOT |
| NULL | NUMERIC | ON |
| OPTIMIZE | OPTION | OPTIONALLY |
| OR | ORDER | OUT |
| OUTER | OUTFILE | PRECISION |
| PRIMARY | PRIVILEGES | PROCEDURE |
| PURGE | READ | REAL |
| REFERENCES | REGEXP | RENAME |
| REPEAT | REPLACE | REQUIRE |
| RESTRICT | RETURN | RETURNS |
| REVOKE | RIGHT | RLIKE |
| RTREE | SELECT | SENSITIVE |
| SEPARATOR | SET | SHOW |
| SMALLINT | SOME | SONAME |
| SPATIAL | SPECIFIC | SQL_BIG_RESULT |
| SQL_CALC_FOUND_ROWS | SQL_SMALL_RESULT | SSL |
| STARTING | STRAIGHT_JOIN | STRIPED |
| TABLE | TABLES | TERMINATED |
| THEN | TINYBLOB | TINYINT |
| TINYTEXT | TO | TRAILING |
| TRUE | TYPES | UNION |
| UNIQUE | UNLOCK | UNSIGNED |
| UNTIL | UPDATE | USAGE |
| USE | USER_RESOURCES | USING |
| VALUES | VARBINARY | VARCHAR |
| VARCHARACTER | VARYING | WARNINGS |
| WHEN | WHERE | WHILE |
| WITH | WRITE | XOR |
| YEAR_MONTH | ZEROFILL | |
The following symbols (from the table above) are disallowed by SQL-99 but allowed by MySQL as column/table names. This is because some of these names are very natural names and a lot of people have already used them.
ACTION
BIT
DATE
ENUM
NO
TEXT
TIME
TIMESTAMP
MySQL supports a number of column types, which may be grouped into three categories: numeric types, date and time types, and string (character) types. This section first gives an overview of the types available and summarises the storage requirements for each column type, then provides a more detailed description of the properties of the types in each category. The overview is intentionally brief. The more detailed descriptions should be consulted for additional information about particular column types, such as the allowable formats in which you can specify values.
The column types supported by MySQL are listed below. The following code letters are used in the descriptions:
M
Indicates the maximum display size. The maximum legal display size is 255.
D
Applies to floating-point and fixed-point types and indicates the number of
digits following the decimal point. The maximum possible value is 30, but
should be no greater than M-2.
Square brackets (`[' and `]') indicate parts of type specifiers that are optional.
Note that if you specify ZEROFILL for a column, MySQL will
automatically add the UNSIGNED attribute to the column.
Warning: you should be aware that when you use subtraction
between integer values where one is of type UNSIGNED, the result
will be unsigned! See section 6.3.5 Cast Functions.
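For example, a minimal sketch of this behaviour, assuming a server recent enough to support CAST() (4.0.2 and up); the unsigned result wraps around instead of becoming -1:
mysql> SELECT CAST(0 AS UNSIGNED) - 1;
        -> 18446744073709551615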
TINYINT[(M)] [UNSIGNED] [ZEROFILL]
A very small integer. The signed range is -128 to 127. The
unsigned range is 0 to 255.
BIT
BOOL
These are synonyms for TINYINT(1).
SMALLINT[(M)] [UNSIGNED] [ZEROFILL]
A small integer. The signed range is -32768 to 32767. The
unsigned range is 0 to 65535.
MEDIUMINT[(M)] [UNSIGNED] [ZEROFILL]
A medium-sized integer. The signed range is -8388608 to
8388607. The unsigned range is 0 to 16777215.
INT[(M)] [UNSIGNED] [ZEROFILL]
A normal-size integer. The signed range is -2147483648 to
2147483647. The unsigned range is 0 to 4294967295.
INTEGER[(M)] [UNSIGNED] [ZEROFILL]
This is a synonym for INT.
BIGINT[(M)] [UNSIGNED] [ZEROFILL]
A large integer. The signed range is -9223372036854775808 to
9223372036854775807. The unsigned range is 0 to
18446744073709551615.
Some things you should be aware of with respect to BIGINT columns:
All arithmetic is done using signed BIGINT or DOUBLE
values, so you shouldn't use unsigned big integers larger than
9223372036854775807 (63 bits) except with bit functions! If you
do that, some of the last digits in the result may be wrong because of
rounding errors when converting the BIGINT to a DOUBLE.
MySQL 4.0 can handle BIGINT in the following cases:
When using integers to store big unsigned values in a BIGINT column.
In MIN(big_int_column) and MAX(big_int_column).
When using operators (+, -, *, etc.) where
both operands are integers.
You can always store an exact integer value in a BIGINT column by
storing it as a string. In this case, MySQL will perform a string-to-number
conversion that involves no intermediate double representation.
Note that -, +, and * will use BIGINT arithmetic when
both arguments are integer values! This means that if you
multiply two big integers (or results from functions that return
integers) you may get unexpected results when the result is larger than
9223372036854775807.
FLOAT(precision) [UNSIGNED] [ZEROFILL]
A floating-point number. precision can be
<=24 for a single-precision floating-point number and between 25
and 53 for a double-precision floating-point number. These types are like
the FLOAT and DOUBLE types described immediately below.
FLOAT(X) has the same range as the corresponding FLOAT and
DOUBLE types, but the display size and number of decimals are undefined.
In MySQL Version 3.23, this is a true floating-point value. In
earlier MySQL versions, FLOAT(precision) always has 2 decimals.
Note that using FLOAT may give you some unexpected problems as
all calculations in MySQL are done with double precision.
See section A.5.6 Solving Problems with No Matching Rows.
This syntax is provided for ODBC compatibility.
FLOAT[(M,D)] [UNSIGNED] [ZEROFILL]
A small (single-precision) floating-point number. Allowable values are -3.402823466E+38 to -1.175494351E-38, 0,
and 1.175494351E-38 to 3.402823466E+38. If
UNSIGNED is specified, negative values are disallowed. The M
is the display width and D is the number of decimals. FLOAT
without arguments or FLOAT(X) where X <= 24 stands for a
single-precision floating-point number.
DOUBLE[(M,D)] [UNSIGNED] [ZEROFILL]
A normal-size (double-precision) floating-point number. Allowable values are -1.7976931348623157E+308 to
-2.2250738585072014E-308, 0, and
2.2250738585072014E-308 to 1.7976931348623157E+308. If
UNSIGNED is specified, negative values are disallowed. The
M is the display width and D is the number of decimals.
DOUBLE without arguments or FLOAT(X) where 25 <= X
<= 53 stands for a double-precision floating-point number.
DOUBLE PRECISION[(M,D)] [UNSIGNED] [ZEROFILL]
REAL[(M,D)] [UNSIGNED] [ZEROFILL]
These are synonyms for DOUBLE.
DECIMAL[(M[,D])] [UNSIGNED] [ZEROFILL]
An unpacked floating-point number. Behaves like a
CHAR column: ``unpacked'' means the number is stored as a string,
using one character for each digit of the value. The decimal point and,
for negative numbers, the `-' sign, are not counted in M (but
space for these is reserved). If D is 0, values will have no decimal
point or fractional part. The maximum range of DECIMAL values is
the same as for DOUBLE, but the actual range for a given
DECIMAL column may be constrained by the choice of M and
D. If UNSIGNED is specified, negative values are disallowed.
If D is omitted, the default is 0. If M is omitted, the
default is 10.
Prior to MySQL Version 3.23, the M argument must include the space
needed for the sign and the decimal point.
DEC[(M[,D])] [UNSIGNED] [ZEROFILL]
NUMERIC[(M[,D])] [UNSIGNED] [ZEROFILL]
These are synonyms for DECIMAL.
DATE
A date. The supported range is '1000-01-01' to '9999-12-31'.
MySQL displays DATE values in 'YYYY-MM-DD' format, but
allows you to assign values to DATE columns using either strings or
numbers. See section 6.2.2.2 The DATETIME, DATE, and TIMESTAMP Types.
DATETIME
A date and time combination. The supported range is '1000-01-01
00:00:00' to '9999-12-31 23:59:59'. MySQL displays
DATETIME values in 'YYYY-MM-DD HH:MM:SS' format, but allows you
to assign values to DATETIME columns using either strings or numbers.
See section 6.2.2.2 The DATETIME, DATE, and TIMESTAMP Types.
TIMESTAMP[(M)]
A timestamp. The range is '1970-01-01 00:00:00' to sometime in the
year 2037.
In MySQL 4.0 and earlier, TIMESTAMP values are displayed in
YYYYMMDDHHMMSS, YYMMDDHHMMSS, YYYYMMDD, or YYMMDD
format, depending on whether M is 14 (or missing), 12,
8, or 6, but you can assign values to TIMESTAMP
columns using either strings or numbers.
From MySQL 4.1, TIMESTAMP is returned as a string with the format
'YYYY-MM-DD HH:MM:SS'. If you want to have this as a number you
should add +0 to the timestamp column. Different timestamp lengths are
not supported. From version 4.0.12, the --new option can be used
to make the server behave as in version 4.1.
A TIMESTAMP column is useful
for recording the date and time of an INSERT or UPDATE
operation because it is automatically set to the date and time of the most
recent operation if you don't give it a value yourself. You can also set it
to the current date and time by assigning it a NULL value.
See section 6.2.2 Date and Time Types.
The M argument affects only how a TIMESTAMP column is displayed;
its values always are stored using 4 bytes each.
Note that TIMESTAMP(M) columns where M is 8 or 14 are reported to
be numbers while other TIMESTAMP(M) columns are reported to be
strings. This is just to ensure that one can reliably dump and restore
the table with these types!
See section 6.2.2.2 The DATETIME, DATE, and TIMESTAMP Types.
TIME
A time. The range is '-838:59:59' to '838:59:59'.
MySQL displays TIME values in 'HH:MM:SS' format, but
allows you to assign values to TIME columns using either strings or
numbers. See section 6.2.2.3 The TIME Type.
YEAR[(2|4)]
A year in 2- or 4-digit format (the default is 4-digit). The allowable values
are 1901 to 2155, and 0000, in the 4-digit year format,
and 1970-2069 if you use the 2-digit format (70-69). MySQL displays
YEAR values in YYYY format, but allows you to assign values to
YEAR columns using either strings or numbers. (The YEAR type is
unavailable prior to MySQL Version 3.22.) See section 6.2.2.4 The YEAR Type.
[NATIONAL] CHAR(M) [BINARY]
A fixed-length string that is always right-padded with spaces to the
specified length when stored. The range of M is 0 to 255 characters
(1 to 255 prior to MySQL Version 3.23).
Trailing spaces are removed when the value is retrieved. CHAR values
are sorted and compared in case-insensitive fashion according to the
default character set unless the BINARY keyword is given.
NATIONAL CHAR (or its equivalent short form, NCHAR) is the
SQL-99 way to define that a CHAR column should use the default
CHARACTER set. This is the default in MySQL.
CHAR is a shorthand for CHARACTER.
MySQL allows you to create a column of type
CHAR(0). This is mainly useful when you have to be compliant with
some old applications that depend on the existence of a column but that do not
actually use the value. This is also quite nice when you need a
column that only can take 2 values: A CHAR(0), that is not defined
as NOT NULL, will occupy only one bit and can take only 2 values:
NULL or "". See section 6.2.3.1 The CHAR and VARCHAR Types.
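A minimal sketch (hypothetical table t2) of such a two-valued column:
mysql> CREATE TABLE t2 (flag CHAR(0));
mysql> INSERT INTO t2 VALUES (NULL),('');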
CHAR
This is a synonym for CHAR(1).
[NATIONAL] VARCHAR(M) [BINARY]
A variable-length string. Note: trailing spaces are removed when the value is
stored (this differs from the SQL-99 specification). The range of
M is 0 to 255 characters (1 to 255 prior to MySQL Version 4.0.2).
VARCHAR values are sorted and compared in case-insensitive fashion
unless the BINARY keyword is given. See section 6.5.3.1 Silent Column Specification Changes.
VARCHAR is a shorthand for CHARACTER VARYING.
See section 6.2.3.1 The CHAR and VARCHAR Types.
TINYBLOB
TINYTEXT
A BLOB or TEXT column with a maximum length of 255 (2^8 - 1)
characters. See section 6.5.3.1 Silent Column Specification Changes. See section 6.2.3.2 The BLOB and TEXT Types.
BLOB
TEXT
A BLOB or TEXT column with a maximum length of 65535 (2^16 - 1)
characters. See section 6.5.3.1 Silent Column Specification Changes. See section 6.2.3.2 The BLOB and TEXT Types.
MEDIUMBLOB
MEDIUMTEXT
A BLOB or TEXT column with a maximum length of 16777215
(2^24 - 1) characters. See section 6.5.3.1 Silent Column Specification Changes. See section 6.2.3.2 The BLOB and TEXT Types.
LONGBLOB
LONGTEXT
A BLOB or TEXT column with a maximum length of 4294967295
(2^32 - 1) characters. See section 6.5.3.1 Silent Column Specification Changes. Note that because
the server/client protocol and MyISAM tables currently have a limit of
16M per communication packet / table row, you can't yet use
the whole range of this type. See section 6.2.3.2 The BLOB and TEXT Types.
ENUM('value1','value2',...)
An enumeration. A string object that can have only one value, chosen from the
list of values 'value1', 'value2', ...,
NULL or the special "" error value. An ENUM can
have a maximum of 65535 distinct values. See section 6.2.3.3 The ENUM Type.
SET('value1','value2',...)
A set. A string object that can have zero or more values, each of which must
be chosen from the list of values 'value1', 'value2',
... A SET can have a maximum of 64 members. See section 6.2.3.4 The SET Type.
MySQL supports all of the SQL-92 numeric data types. These
types include the exact numeric data types (NUMERIC,
DECIMAL, INTEGER, and SMALLINT), as well as the
approximate numeric data types (FLOAT, REAL, and
DOUBLE PRECISION). The keyword INT is a synonym for
INTEGER, and the keyword DEC is a synonym for
DECIMAL.
The NUMERIC and DECIMAL types are implemented as the same
type by MySQL, as permitted by the SQL-92 standard. They are
used for values for which it is important to preserve exact precision,
for example with monetary data. When declaring a column of one of these
types the precision and scale can be (and usually is) specified; for
example:
salary DECIMAL(5,2)
In this example, 5 (precision) represents the number of
significant decimal digits that will be stored for values, and 2
(scale) represents the number of digits that will be stored
following the decimal point. In this case, therefore, the range of
values that can be stored in the salary column is from
-99.99 to 99.99.
(MySQL can actually store numbers up to 999.99 in this column
because it doesn't have to store the sign for positive numbers)
In SQL-92, the syntax DECIMAL(p) is equivalent to
DECIMAL(p,0). Similarly, the syntax DECIMAL is equivalent
to DECIMAL(p,0), where the implementation is allowed to decide
the value of p. MySQL does not currently support either of these
variant forms of the DECIMAL/NUMERIC data types. This is
not generally a serious problem, as the principal benefits of these
types derive from the ability to control both precision and scale
explicitly.
DECIMAL and NUMERIC values are stored as strings, rather
than as binary floating-point numbers, in order to preserve the decimal
precision of those values. One character is used for each digit of the
value, the decimal point (if scale > 0), and the `-' sign
(for negative numbers). If scale is 0, DECIMAL and
NUMERIC values contain no decimal point or fractional part.
The maximum range of DECIMAL and NUMERIC values is the
same as for DOUBLE, but the actual range for a given
DECIMAL or NUMERIC column can be constrained by the
precision or scale for a given column. When such a column
is assigned a value with more digits following the decimal point than
are allowed by the specified scale, the value is rounded to that
scale. When a DECIMAL or NUMERIC column is
assigned a value whose magnitude exceeds the range implied by the
specified (or defaulted) precision and scale,
MySQL stores the value representing the corresponding end
point of that range.
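For example, a minimal sketch (hypothetical table dec_test) of how an out-of-range value is stored as the nearest endpoint:
mysql> CREATE TABLE dec_test (d DECIMAL(5,2));
mysql> INSERT INTO dec_test VALUES (1234.567);
mysql> SELECT d FROM dec_test;
        -> 999.99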
As an extension to the SQL-92 standard, MySQL also
supports the integer types TINYINT, MEDIUMINT, and
BIGINT as listed in the tables above. Another extension is
supported by MySQL for optionally specifying the display width
of an integer value in parentheses following the base keyword for the
type (for example, INT(4)). This optional width specification is
used to left-pad the display of values whose width is less than the
width specified for the column, but does not constrain the range of
values that can be stored in the column, nor the number of digits that
will be displayed for values whose width exceeds that specified for the
column. When used in conjunction with the optional extension attribute
ZEROFILL, the default padding of spaces is replaced with zeroes.
For example, for a column declared as INT(5) ZEROFILL, a value
of 4 is retrieved as 00004. Note that if you store larger
values than the display width in an integer column, you may experience
problems when MySQL generates temporary tables for some
complicated joins, as in these cases MySQL trusts that the
data did fit into the original column width.
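For example, a minimal sketch (hypothetical table t3) of ZEROFILL padding:
mysql> CREATE TABLE t3 (i INT(5) ZEROFILL);
mysql> INSERT INTO t3 VALUES (4);
mysql> SELECT i FROM t3;
        -> 00004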
All integer types can have an optional (non-standard) attribute
UNSIGNED. Unsigned values can be used when you want to allow
only positive numbers in a column and you need a little bigger numeric
range for the column.
As of MySQL 4.0.2, floating-point types also can be UNSIGNED.
As with integer types, this attribute prevents negative values from
being stored in the column. Unlike the integer types, the upper range
of column values remains the same.
The FLOAT type is used to represent approximate numeric data
types. The SQL-92 standard allows an optional specification of
the precision (but not the range of the exponent) in bits following the
keyword FLOAT in parentheses. The MySQL implementation
also supports this optional precision specification. When the keyword
FLOAT is used for a column type without a precision
specification, MySQL uses four bytes to store the values. A
variant syntax is also supported, with two numbers given in parentheses
following the FLOAT keyword. With this option, the first number
continues to represent the storage requirements for the value in bytes,
and the second number specifies the number of digits to be stored and
displayed following the decimal point (as with DECIMAL and
NUMERIC). When MySQL is asked to store a number for
such a column with more decimal digits following the decimal point than
specified for the column, the value is rounded to eliminate the extra
digits when the value is stored.
The REAL and DOUBLE PRECISION types do not accept
precision specifications. As an extension to the SQL-92
standard, MySQL recognises DOUBLE as a synonym for the
DOUBLE PRECISION type. In contrast with the standard's
requirement that the precision for REAL be smaller than that used
for DOUBLE PRECISION, MySQL implements both as 8-byte
double-precision floating-point values (when not running in ``ANSI mode'').
For maximum portability, code requiring storage of approximate numeric
data values should use FLOAT or DOUBLE PRECISION with no
specification of precision or number of decimal points.
When asked to store a value in a numeric column that is outside the column type's allowable range, MySQL clips the value to the appropriate endpoint of the range and stores the resulting value instead.
For example, the range of an INT column is -2147483648 to
2147483647. If you try to insert -9999999999 into an
INT column, the value is clipped to the lower endpoint of the range,
and -2147483648 is stored instead. Similarly, if you try to insert
9999999999, 2147483647 is stored instead.
If the INT column is UNSIGNED, the size of the column's
range is the same but its endpoints shift up to 0 and 4294967295.
If you try to store -9999999999 and 9999999999,
the values stored in the column become 0 and 4294967295, respectively.
Conversions that occur due to clipping are reported as ``warnings'' for
ALTER TABLE, LOAD DATA INFILE, UPDATE, and
multi-row INSERT statements.
| Type | Bytes | From | To |
| TINYINT | 1 | -128 | 127 |
| SMALLINT | 2 | -32768 | 32767 |
| MEDIUMINT | 3 | -8388608 | 8388607 |
| INT | 4 | -2147483648 | 2147483647 |
| BIGINT | 8 | -9223372036854775808 | 9223372036854775807 |
The date and time types are DATETIME, DATE,
TIMESTAMP, TIME, and YEAR. Each of these has a
range of legal values, as well as a ``zero'' value that is used when you
specify a really illegal value. Note that MySQL allows you to store
certain 'not strictly' legal date values, for example 1999-11-31.
The reason for this is that we think it's the responsibility of the
application to handle date checking, not the SQL server. To make the
date checking 'fast', MySQL only checks that the month is in
the range of 0-12 and the day is in the range of 0-31. The above ranges
are defined this way because MySQL allows you to store, in a
DATE or DATETIME column, dates where the day or month-day
is zero. This is extremely useful for applications that need to store
a birth-date for which you don't know the exact date. In this case you
simply store the date like 1999-00-00 or 1999-01-00. (You
cannot expect to get a correct value from functions like DATE_SUB()
or DATE_ADD() for dates like these.)
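For example, a minimal sketch (hypothetical table people) of storing a date with an unknown day:
mysql> CREATE TABLE people (birth DATE);
mysql> INSERT INTO people VALUES ('1999-01-00');
mysql> SELECT birth FROM people;
        -> '1999-01-00'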
Here are some general considerations to keep in mind when working with date and time types:
MySQL always expects the year part of a date value to be leftmost; dates must
be given in year-month-day order (for example, '98-09-04'), rather than
in the month-day-year or day-month-year orders commonly used elsewhere (for
example, '09-04-98', '04-09-98').
Illegal date or time values are converted to the ``zero'' value of the
appropriate type. (The exception is that out-of-range TIME values are clipped to
the appropriate endpoint of the TIME range.) The following table
shows the format of the ``zero'' value for each type:
| Column type | ``Zero'' value |
| DATETIME | '0000-00-00 00:00:00' |
| DATE | '0000-00-00' |
| TIMESTAMP | 00000000000000 (length depends on display size) |
| TIME | '00:00:00' |
| YEAR | 0000 |
The ``zero'' values are special, but you can also store or refer to them
using the simpler values '0' or 0, which are easier to write.
``Zero'' date or time values used through MyODBC are converted
automatically to NULL in MyODBC Version 2.50.12 and above,
because ODBC can't handle such values.
MySQL itself is Y2K-safe (see section 1.2.5 Year 2000 Compliance), but input values presented to MySQL may not be. Any input containing 2-digit year values is ambiguous, because the century is unknown. Such values must be interpreted into 4-digit form because MySQL stores years internally using four digits.
For DATETIME, DATE, TIMESTAMP, and YEAR types,
MySQL interprets dates with ambiguous year values using the
following rules:
Year values in the range 00-69 are converted to 2000-2069.
Year values in the range 70-99 are converted to 1970-1999.
Remember that these rules provide only reasonable guesses as to what your data mean. If the heuristics used by MySQL don't produce the correct values, you should provide unambiguous input containing 4-digit year values.
ORDER BY will sort 2-digit YEAR/DATE/DATETIME types properly.
Note also that some functions like MIN() and MAX() will convert a
TIMESTAMP/DATE to a number. This means that a timestamp with a
2-digit year will not work properly with these functions. The fix in this
case is to convert the TIMESTAMP/DATE to 4-digit year format or
use something like MIN(DATE_ADD(timestamp,INTERVAL 0 DAY)).
DATETIME, DATE, and TIMESTAMP Types
The DATETIME, DATE, and TIMESTAMP types are related.
This section describes their characteristics, how they are similar, and how
they differ.
The DATETIME type is used when you need values that contain both date
and time information. MySQL retrieves and displays DATETIME
values in 'YYYY-MM-DD HH:MM:SS' format. The supported range is
'1000-01-01 00:00:00' to '9999-12-31 23:59:59'. (``Supported''
means that although earlier values might work, there is no guarantee that
they will.)
The DATE type is used when you need only a date value, without a time
part. MySQL retrieves and displays DATE values in
'YYYY-MM-DD' format. The supported range is '1000-01-01' to
'9999-12-31'.
The TIMESTAMP column type provides a type that you can use to
automatically mark INSERT or UPDATE operations with the current
date and time. If you have multiple TIMESTAMP columns, only the first
one is updated automatically.
Automatic updating of the first TIMESTAMP column occurs under any of
the following conditions:
The column is not specified explicitly in an INSERT or
LOAD DATA INFILE statement.
The column is not specified explicitly in an UPDATE statement and some
other column changes value. (Note that an UPDATE that sets a column
to the value it already has will not cause the TIMESTAMP column to be
updated, because if you set a column to its current value, MySQL
ignores the update for efficiency.)
You explicitly set the TIMESTAMP column to NULL.
TIMESTAMP columns other than the first may also be set to the current
date and time. Just set the column to NULL or to NOW().
You can set any TIMESTAMP column to a value different from the current
date and time by setting it explicitly to the desired value. This is true
even for the first TIMESTAMP column. You can use this property if,
for example, you want a TIMESTAMP to be set to the current date and
time when you create a row, but not to be changed whenever the row is updated
later:
Let MySQL set the column when the row is created, and when you perform
subsequent updates to other columns, set the TIMESTAMP column explicitly to its current value.
On the other hand, you may find it just as easy to use a DATETIME
column that you initialise to NOW() when the row is created and
leave alone for subsequent updates.
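For example, a minimal sketch (hypothetical table events) of automatic setting and of keeping the column unchanged on a later update:
mysql> CREATE TABLE events (created TIMESTAMP, data VARCHAR(10));
mysql> INSERT INTO events (data) VALUES ('a');           # created is set automatically
mysql> UPDATE events SET data='b', created=created;      # keep created unchanged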
TIMESTAMP values may range from the beginning of 1970 to sometime in
the year 2037, with a resolution of one second. Values are displayed as
numbers.
The format in which MySQL retrieves and displays TIMESTAMP
values depends on the display size, as illustrated by the following table. The
`full' TIMESTAMP format is 14 digits, but TIMESTAMP columns may
be created with shorter display sizes:
| Column type | Display format |
| TIMESTAMP(14) | YYYYMMDDHHMMSS |
| TIMESTAMP(12) | YYMMDDHHMMSS |
| TIMESTAMP(10) | YYMMDDHHMM |
| TIMESTAMP(8) | YYYYMMDD |
| TIMESTAMP(6) | YYMMDD |
| TIMESTAMP(4) | YYMM |
| TIMESTAMP(2) | YY |
All TIMESTAMP columns have the same storage size, regardless of
display size. The most common display sizes are 6, 8, 12, and 14. You can
specify an arbitrary display size at table creation time, but values of 0 or
greater than 14 are coerced to 14. Odd-valued sizes in the range from 1 to
13 are coerced to the next higher even number.
Note: from version 4.1, TIMESTAMP is returned as a string with
the format 'YYYY-MM-DD HH:MM:SS', and different timestamp lengths are
no longer supported.
You can specify DATETIME, DATE, and TIMESTAMP values using
any of a common set of formats:
As a string in either 'YYYY-MM-DD HH:MM:SS' or 'YY-MM-DD
HH:MM:SS' format. A ``relaxed'' syntax is allowed--any punctuation
character may be used as the delimiter between date parts or time parts.
For example, '98-12-31 11:30:45', '98.12.31 11+30+45',
'98/12/31 11*30*45', and '98@12@31 11^30^45' are
equivalent.
As a string in either 'YYYY-MM-DD' or 'YY-MM-DD' format.
A ``relaxed'' syntax is allowed here, too. For example, '98-12-31',
'98.12.31', '98/12/31', and '98@12@31' are
equivalent.
As a string with no delimiters in either 'YYYYMMDDHHMMSS' or
'YYMMDDHHMMSS' format, provided that the string makes sense as a
date. For example, '19970523091528' and '970523091528' are
interpreted as '1997-05-23 09:15:28', but '971122129015' is
illegal (it has a nonsensical minute part) and becomes '0000-00-00
00:00:00'.
As a string with no delimiters in either 'YYYYMMDD' or 'YYMMDD'
format, provided that the string makes sense as a date. For example,
'19970523' and '970523' are interpreted as
'1997-05-23', but '971332' is illegal (it has nonsensical month
and day parts) and becomes '0000-00-00'.
As a number in either YYYYMMDDHHMMSS or YYMMDDHHMMSS
format, provided that the number makes sense as a date. For example,
19830905132800 and 830905132800 are interpreted as
'1983-09-05 13:28:00'.
As a number in either YYYYMMDD or YYMMDD
format, provided that the number makes sense as a date. For example,
19830905 and 830905 are interpreted as '1983-09-05'.
As the result of a function that returns a value that is acceptable in a
DATETIME, DATE, or TIMESTAMP context, such as
NOW() or CURRENT_DATE.
Illegal DATETIME, DATE, or TIMESTAMP values are converted
to the ``zero'' value of the appropriate type ('0000-00-00 00:00:00',
'0000-00-00', or 00000000000000).
For values specified as strings that include date part delimiters, it is not
necessary to specify two digits for month or day values that are less than
10. '1979-6-9' is the same as '1979-06-09'. Similarly,
for values specified as strings that include time part delimiters, it is not
necessary to specify two digits for hour, minute, or second values that are
less than 10. '1979-10-30 1:2:3' is the same as
'1979-10-30 01:02:03'.
Values specified as numbers should be 6, 8, 12, or 14 digits long. If the
number is 8 or 14 digits long, it is assumed to be in YYYYMMDD or
YYYYMMDDHHMMSS format and that the year is given by the first 4
digits. If the number is 6 or 12 digits long, it is assumed to be in
YYMMDD or YYMMDDHHMMSS format and that the year is given by the
first 2 digits. Numbers that are not one of these lengths are interpreted
as though padded with leading zeros to the closest length.
Values specified as non-delimited strings are interpreted using their length
as given. If the string is 8 or 14 characters long, the year is assumed to
be given by the first 4 characters. Otherwise, the year is assumed to be
given by the first 2 characters. The string is interpreted from left to
right to find year, month, day, hour, minute, and second values, for as many
parts as are present in the string. This means you should not use strings
that have fewer than 6 characters. For example, if you specify '9903',
thinking that will represent March, 1999, you will find that MySQL
inserts a ``zero'' date into your table. This is because the year and month
values are 99 and 03, but the day part is missing (zero), so
the value is not a legal date.
TIMESTAMP columns store legal values using the full precision with
which the value was specified, regardless of the display size. This has
several implications:
You must always specify the year, month, and day, even if your column types are
TIMESTAMP(4) or TIMESTAMP(2). Otherwise, the value will not
be a legal date and 0 will be stored.
If you use ALTER TABLE to widen a narrow TIMESTAMP column,
information will be displayed that previously was ``hidden''.
Similarly, narrowing a TIMESTAMP column does not cause information to
be lost, except in the sense that less information is shown when the values
are displayed.
Although TIMESTAMP values are stored to full precision, the only
function that operates directly on the underlying stored value is
UNIX_TIMESTAMP(). Other functions operate on the formatted retrieved
value. This means you cannot use functions such as HOUR() or
SECOND() unless the relevant part of the TIMESTAMP value is
included in the formatted value. For example, the HH part of a
TIMESTAMP column is not displayed unless the display size is at least
10, so trying to use HOUR() on shorter TIMESTAMP values
produces a meaningless result.
You can to some extent assign values of one date type to an object of a different date type. However, there may be some alteration of the value or loss of information:
If you assign a DATE value to a DATETIME or TIMESTAMP
object, the time part of the resulting value is set to '00:00:00',
because the DATE value contains no time information.
If you assign a DATETIME or TIMESTAMP value to a DATE
object, the time part of the resulting value is deleted, because the
DATE type stores no time information.
Remember that although DATETIME, DATE, and TIMESTAMP
values all can be specified using the same set of formats, the types do not
all have the same range of values. For example, TIMESTAMP values
cannot be earlier than 1970 or later than 2037. This means
that a date such as '1968-01-01', while legal as a DATETIME or
DATE value, is not a valid TIMESTAMP value and will be
converted to 0 if assigned to such an object.
Be aware of certain pitfalls when specifying date values:
The relaxed format allowed for values specified as strings can be deceiving.
For example, '10:11:12' might look like a time value
because of the `:' delimiter, but if used in a date context will be
interpreted as the year '2010-11-12'. The value '10:45:15'
will be converted to '0000-00-00' because '45' is not a legal
month.
MySQL only performs basic checking on the validity of a date: days
00-31, months 00-12, years 1000-9999.
Any date not within this range will revert to 0000-00-00.
Please note that this still allows you to store invalid dates such as
2002-04-31. It allows web applications to store data from a form
without further checking. To ensure a date is valid, perform a check in
your application.
Year values in the range 00-69 are converted to 2000-2069.
Year values in the range 70-99 are converted to 1970-1999.
TIME Type
MySQL retrieves and displays TIME values in 'HH:MM:SS'
format (or 'HHH:MM:SS' format for large hours values). TIME
values may range from '-838:59:59' to '838:59:59'. The reason
the hours part may be so large is that the TIME type may be used not
only to represent a time of day (which must be less than 24 hours), but also
elapsed time or a time interval between two events (which may be much greater
than 24 hours, or even negative).
You can specify TIME values in a variety of formats:
As a string in 'D HH:MM:SS.fraction' format. (Note that
MySQL doesn't yet store the fraction for the time column.) One
can also use one of the following ``relaxed'' syntaxes:
HH:MM:SS.fraction, HH:MM:SS, HH:MM, D HH:MM:SS,
D HH:MM, D HH or SS. Here D is days between 0-34.
As a string with no delimiters in 'HHMMSS' format, provided that
it makes sense as a time. For example, '101112' is understood as
'10:11:12', but '109712' is illegal (it has a nonsensical
minute part) and becomes '00:00:00'.
As a number in HHMMSS format, provided that it makes sense as a time.
For example, 101112 is understood as '10:11:12'. The following
alternative formats are also understood: SS, MMSS, HHMMSS,
HHMMSS.fraction. Note that MySQL doesn't yet store the
fraction part.
As the result of a function that returns a value that is acceptable in a
TIME context, such as CURRENT_TIME.
For TIME values specified as strings that include a time part
delimiter, it is not necessary to specify two digits for hours, minutes, or
seconds values that are less than 10. '8:3:2' is the same as
'08:03:02'.
Be careful about assigning ``short'' TIME values to a TIME
column. Without colons, MySQL interprets values using the
assumption that the rightmost digits represent seconds. (MySQL
interprets TIME values as elapsed time rather than as time of
day.) For example, you might think of '1112' and 1112 as
meaning '11:12:00' (12 minutes after 11 o'clock), but
MySQL interprets them as '00:11:12' (11 minutes, 12 seconds).
Similarly, '12' and 12 are interpreted as '00:00:12'.
TIME values with colons, by contrast, are always treated as
time of the day. That is '11:12' will mean '11:12:00',
not '00:11:12'.
Values that lie outside the TIME range
but are otherwise legal are clipped to the appropriate
endpoint of the range. For example, '-850:00:00' and
'850:00:00' are converted to '-838:59:59' and
'838:59:59'.
Illegal TIME values are converted to '00:00:00'. Note that
because '00:00:00' is itself a legal TIME value, there is no way
to tell, from a value of '00:00:00' stored in a table, whether the
original value was specified as '00:00:00' or whether it was illegal.
YEAR Type
The YEAR type is a 1-byte type used for representing years.
MySQL retrieves and displays YEAR values in YYYY
format. The range is 1901 to 2155.
You can specify YEAR values in a variety of formats:
As a four-digit string in the range '1901' to '2155'.
As a four-digit number in the range 1901 to 2155.
As a two-digit string in the range '00' to '99'. Values in the
ranges '00' to '69' and '70' to '99' are
converted to YEAR values in the ranges 2000 to 2069 and
1970 to 1999.
As a two-digit number in the range 1 to 99. Values in the
ranges 1 to 69 and 70 to 99 are converted to
YEAR values in the ranges 2001 to 2069 and 1970
to 1999. Note that the range for two-digit numbers is slightly
different from the range for two-digit strings, because you cannot specify zero
directly as a number and have it be interpreted as 2000. You
must specify it as a string '0' or '00' or it will be
interpreted as 0000.
As the result of a function that returns a value that is acceptable in a
YEAR context, such as NOW().
Illegal YEAR values are converted to 0000.
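For example, a minimal sketch (hypothetical table y1) showing the difference between the number 0 and the string '0':
mysql> CREATE TABLE y1 (y YEAR);
mysql> INSERT INTO y1 VALUES (0),('0'),(2);
mysql> SELECT y FROM y1;
        -> 0000, 2000, 2002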
The string types are CHAR, VARCHAR, BLOB, TEXT,
ENUM, and SET. This section describes how these types work,
their storage requirements, and how to use them in your queries.
| Type | Max.size | Bytes |
| TINYTEXT or TINYBLOB | 2^8-1 | 255 |
| TEXT or BLOB | 2^16-1 (64K-1) | 65535 |
| MEDIUMTEXT or MEDIUMBLOB | 2^24-1 (16M-1) | 16777215 |
| LONGTEXT or LONGBLOB | 2^32-1 (4G-1) | 4294967295 |
CHAR and VARCHAR Types
The CHAR and VARCHAR types are similar, but differ in the
way they are stored and retrieved.
The length of a CHAR column is fixed to the length that you declare
when you create the table. The length can be any value between 1 and 255.
(As of MySQL Version 3.23, the length of CHAR may be 0 to 255.)
When CHAR values are stored, they are right-padded with spaces to the
specified length. When CHAR values are retrieved, trailing spaces are
removed.
Values in VARCHAR columns are variable-length strings. You can
declare a VARCHAR column to be any length between 1 and 255, just as
for CHAR columns. However, in contrast to CHAR, VARCHAR
values are stored using only as many characters as are needed, plus one byte
to record the length. Values are not padded; instead, trailing spaces are
removed when values are stored. (This space removal differs from the SQL-99
specification.) No case conversion takes place during storage or retrieval.
If you assign a value to a CHAR or VARCHAR column that
exceeds the column's maximum length, the value is truncated to fit.
The following table illustrates the differences between the two types of columns
by showing the result of storing various string values into CHAR(4)
and VARCHAR(4) columns:
| Value | CHAR(4) | Storage required | VARCHAR(4) | Storage required |
| '' | '    ' | 4 bytes | '' | 1 byte |
| 'ab' | 'ab  ' | 4 bytes | 'ab' | 3 bytes |
| 'abcd' | 'abcd' | 4 bytes | 'abcd' | 5 bytes |
| 'abcdefgh' | 'abcd' | 4 bytes | 'abcd' | 5 bytes |
The values retrieved from the CHAR(4) and VARCHAR(4) columns
will be the same in each case, because trailing spaces are removed from
CHAR columns upon retrieval.
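For example, a minimal sketch (hypothetical table t4) showing that trailing spaces do not survive in either type (removed on storage for VARCHAR, on retrieval for CHAR):
mysql> CREATE TABLE t4 (c CHAR(4), v VARCHAR(4));
mysql> INSERT INTO t4 VALUES ('ab  ','ab  ');
mysql> SELECT CONCAT('(',c,')'), CONCAT('(',v,')') FROM t4;
        -> '(ab)', '(ab)'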
Values in CHAR and VARCHAR columns are sorted and compared
in case-insensitive fashion, unless the BINARY attribute was
specified when the table was created. The BINARY attribute means
that column values are sorted and compared in case-sensitive fashion
according to the ASCII order of the machine where the MySQL
server is running. BINARY doesn't affect how the column is stored
or retrieved.
The BINARY attribute is sticky. This means that if a column marked
BINARY is used in an expression, the whole expression is compared as a
BINARY value.
MySQL may silently change the type of a CHAR or VARCHAR
column at table creation time.
See section 6.5.3.1 Silent Column Specification Changes.
BLOB and TEXT Types
A BLOB is a binary large object that can hold a variable amount of
data. The four BLOB types TINYBLOB, BLOB,
MEDIUMBLOB, and LONGBLOB differ only in the maximum length of
the values they can hold.
See section 6.2.6 Column Type Storage Requirements.
The four TEXT types TINYTEXT, TEXT, MEDIUMTEXT,
and LONGTEXT correspond to the four BLOB types and have the
same maximum lengths and storage requirements. The only difference between
BLOB and TEXT types is that sorting and comparison is performed
in case-sensitive fashion for BLOB values and case-insensitive fashion
for TEXT values. In other words, a TEXT is a case-insensitive
BLOB. No case conversion takes place during storage or retrieval.
If you assign a value to a BLOB or TEXT column that exceeds
the column type's maximum length, the value is truncated to fit.
In most respects, you can regard a TEXT column as a VARCHAR
column that can be as big as you like. Similarly, you can regard a
BLOB column as a VARCHAR BINARY column. The differences are:
You can have indexes on BLOB and TEXT columns with
MySQL Version 3.23.2 and newer. Older versions of
MySQL did not support this.
There is no trailing-space removal for BLOB and TEXT columns
when values are stored, as there is for VARCHAR columns.
BLOB and TEXT columns cannot have DEFAULT values.
MyODBC defines BLOB values as LONGVARBINARY and
TEXT values as LONGVARCHAR.
Because BLOB and TEXT values may be extremely long, you
may run up against some constraints when using them:
If you want to use GROUP BY or ORDER BY on a BLOB or
TEXT column, you must convert the column value into a fixed-length
object. The standard way to do this is with the SUBSTRING
function. For example:
mysql> SELECT comment, SUBSTRING(comment,1,20) AS substr FROM tbl_name
    -> ORDER BY substr;
If you don't do this, only the first max_sort_length bytes of the
column are used when sorting. The default value of max_sort_length is
1024; this value can be changed using the -O option when starting the
mysqld server. You can group on an expression involving BLOB or
TEXT values by specifying the column position or by using an alias:
mysql> SELECT id,SUBSTRING(blob_col,1,100) FROM tbl_name GROUP BY 2;
mysql> SELECT id,SUBSTRING(blob_col,1,100) AS b FROM tbl_name GROUP BY b;
The maximum size of a BLOB or TEXT object is determined by its
type, but the largest value you can actually transmit between the client and
server is determined by the amount of available memory and the size of the
communications buffers. You can change the message buffer size, but you must
do so on both the server and client ends. See section 5.5.2 Tuning Server Parameters.
Note that each BLOB or TEXT value is represented
internally by a separately allocated object. This is in contrast to all
other column types, for which storage is allocated once per column when
the table is opened.
ENUM Type
An ENUM is a string object whose value normally is chosen from a list
of allowed values that are enumerated explicitly in the column specification
at table creation time.
The value may also be the empty string ("") or NULL under
certain circumstances:
If you insert an invalid value into an ENUM (that is, a string not
present in the list of allowed values), the empty string is inserted
instead as a special error value. This string can be distinguished from a
'normal' empty string by the fact that this string has the numerical value
0. More about this later.
If an ENUM column is declared NULL, NULL is also a legal value
for the column, and the default value is NULL. If an ENUM is
declared NOT NULL, the default value is the first element of the
list of allowed values.
Each enumeration value has an index:
Values from the list of allowable elements in the column specification are
numbered beginning with 1.
The index of the empty string error value is 0. This means that you can use
the following SELECT statement to find rows into which invalid
ENUM values were assigned:
mysql> SELECT * FROM tbl_name WHERE enum_col=0;
The index of the NULL value is NULL.
For example, a column specified as ENUM("one", "two", "three") can
have any of the values shown here. The index of each value is also shown:
| Value | Index |
| NULL | NULL |
| "" | 0 |
| "one" | 1 |
| "two" | 2 |
| "three" | 3 |
An enumeration can have a maximum of 65535 elements.
Starting from 3.23.51 trailing spaces are automatically deleted from
ENUM values when the table is created.
Lettercase is irrelevant when you assign values to an ENUM column.
However, values retrieved from the column later have lettercase matching the
values that were used to specify the allowable values at table creation time.
If you retrieve an ENUM in a numeric context, the column value's
index is returned. For example, you can retrieve numeric values from
an ENUM column like this:
mysql> SELECT enum_col+0 FROM tbl_name;
If you store a number into an ENUM, the number is treated as an
index, and the value stored is the enumeration member with that index.
(However, this will not work with LOAD DATA, which treats all
input as strings.)
It's not advisable to store numbers in an ENUM string because
it will make things confusing.
ENUM values are sorted according to the order in which the enumeration
members were listed in the column specification. (In other words,
ENUM values are sorted according to their index numbers.) For
example, "a" sorts before "b" for ENUM("a", "b"), but
"b" sorts before "a" for ENUM("b", "a"). The empty
string sorts before non-empty strings, and NULL values sort before
all other enumeration values.
To prevent unexpected results, specify the ENUM list in alphabetical
order. You can also use GROUP BY CONCAT(col) to make sure the column
is sorted alphabetically rather than by index number.
If you want to get all possible values for an ENUM column, you should
use SHOW COLUMNS FROM table_name LIKE 'enum_column_name' and parse
the ENUM definition in the second column.
SET Type
A SET is a string object that can have zero or more values, each of
which must be chosen from a list of allowed values specified when the table
is created. SET column values that consist of multiple set members
are specified with members separated by commas (`,'). A consequence of
this is that SET member values cannot themselves contain commas.
For example, a column specified as SET("one", "two") NOT NULL can have
any of these values:
"" "one" "two" "one,two"
A SET can have a maximum of 64 different members.
Starting from 3.23.51 trailing spaces are automatically deleted from
SET values when the table is created.
MySQL stores SET values numerically, with the low-order bit
of the stored value corresponding to the first set member. If you retrieve a
SET value in a numeric context, the value retrieved has bits set
corresponding to the set members that make up the column value. For example,
you can retrieve numeric values from a SET column like this:
mysql> SELECT set_col+0 FROM tbl_name;
If a number is stored into a SET column, the bits that
are set in the binary representation of the number determine the
set members in the column value. Suppose a column is specified as
SET("a","b","c","d"). Then the members have the following bit
values:
| SET member | Decimal value | Binary value |
| a | 1 | 0001 |
| b | 2 | 0010 |
| c | 4 | 0100 |
| d | 8 | 1000 |
If you assign a value of 9 to this column, that is 1001 in
binary, so the first and fourth SET value members "a" and
"d" are selected and the resulting value is "a,d".
For a value containing more than one SET element, it does not matter
what order the elements are listed in when you insert the value. It also
does not matter how many times a given element is listed in the value.
When the value is retrieved later, each element in the value will appear
once, with elements listed according to the order in which they were
specified at table creation time. For example, if a column is specified as
SET("a","b","c","d"), then "a,d", "d,a", and
"d,a,a,d,d" will all appear as "a,d" when retrieved.
If you set a SET column to an unsupported value, the value will
be ignored.
SET values are sorted numerically. NULL values sort before
non-NULL SET values.
Normally, you perform a SELECT on a SET column using
the LIKE operator or the FIND_IN_SET() function:
mysql> SELECT * FROM tbl_name WHERE set_col LIKE '%value%';
mysql> SELECT * FROM tbl_name WHERE FIND_IN_SET('value',set_col)>0;
But the following will also work:
mysql> SELECT * FROM tbl_name WHERE set_col = 'val1,val2';
mysql> SELECT * FROM tbl_name WHERE set_col & 1;
The first of these statements looks for an exact match. The second looks for values containing the first set member.
If you want to get all possible values for a SET column, you should
use SHOW COLUMNS FROM table_name LIKE 'set_column_name' and parse
the SET definition in the second column.
For the most efficient use of storage, try to use the most precise type in
all cases. For example, if an integer column will be used for values in the
range between 1 and 99999, MEDIUMINT UNSIGNED is the
best type.
Accurate representation of monetary values is a common problem. In
MySQL, you should use the DECIMAL type. This is stored as
a string, so no loss of accuracy should occur. If accuracy is not
too important, the DOUBLE type may also be good enough.
For high precision, you can always convert to a fixed-point type stored
in a BIGINT. This allows you to do all calculations with integers
and convert results back to floating-point values only when necessary.
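For example, a minimal sketch (hypothetical table accounts) of the fixed-point approach, storing an amount as an integer number of cents:
mysql> CREATE TABLE accounts (balance_cents BIGINT);
mysql> INSERT INTO accounts VALUES (1999);
mysql> SELECT balance_cents/100 FROM accounts;
        -> 19.99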
To make it easier to use code written for SQL implementations from other vendors, MySQL maps column types as shown in the following table. These mappings make it easier to move table definitions from other database engines to MySQL:
| Other vendor type | MySQL type |
| BINARY(NUM) | CHAR(NUM) BINARY |
| CHAR VARYING(NUM) | VARCHAR(NUM) |
| FLOAT4 | FLOAT |
| FLOAT8 | DOUBLE |
| INT1 | TINYINT |
| INT2 | SMALLINT |
| INT3 | MEDIUMINT |
| INT4 | INT |
| INT8 | BIGINT |
| LONG VARBINARY | MEDIUMBLOB |
| LONG VARCHAR | MEDIUMTEXT |
| MIDDLEINT | MEDIUMINT |
| VARBINARY(NUM) | VARCHAR(NUM) BINARY |
Column type mapping occurs at table creation time. If you create a table
with types used by other vendors and then issue a DESCRIBE tbl_name
statement, MySQL reports the table structure using the equivalent
MySQL types.
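For example, a minimal sketch (hypothetical table map_test) of the mapping being applied at creation time:
mysql> CREATE TABLE map_test (a INT1, b FLOAT8);
mysql> DESCRIBE map_test;
The columns are reported with the equivalent MySQL types (tinyint and double), not the vendor types used in the CREATE TABLE statement.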
The storage requirements for each of the column types supported by MySQL are listed by category.
| Column type | Storage required |
| TINYINT | 1 byte |
| SMALLINT | 2 bytes |
| MEDIUMINT | 3 bytes |
| INT | 4 bytes |
| INTEGER | 4 bytes |
| BIGINT | 8 bytes |
| FLOAT(X) | 4 bytes if X <= 24 or 8 bytes if 25 <= X <= 53 |
| FLOAT | 4 bytes |
| DOUBLE | 8 bytes |
| DOUBLE PRECISION | 8 bytes |
| REAL | 8 bytes |
| DECIMAL(M,D) | M+2 bytes if D > 0, M+1 bytes if D = 0 (D+2, if M < D) |
| NUMERIC(M,D) | M+2 bytes if D > 0, M+1 bytes if D = 0 (D+2, if M < D) |
| Column type | Storage required |
| DATE | 3 bytes |
| DATETIME | 8 bytes |
| TIMESTAMP | 4 bytes |
| TIME | 3 bytes |
| YEAR | 1 byte |
| Column type | Storage required |
| CHAR(M) | M bytes, 1 <= M <= 255 |
| VARCHAR(M) | L+1 bytes, where L <= M and 1 <= M <= 255 |
| TINYBLOB, TINYTEXT | L+1 bytes, where L < 2^8 |
| BLOB, TEXT | L+2 bytes, where L < 2^16 |
| MEDIUMBLOB, MEDIUMTEXT | L+3 bytes, where L < 2^24 |
| LONGBLOB, LONGTEXT | L+4 bytes, where L < 2^32 |
| ENUM('value1','value2',...) | 1 or 2 bytes, depending on the number of enumeration values (65535 values maximum) |
| SET('value1','value2',...) | 1, 2, 3, 4 or 8 bytes, depending on the number of set members (64 members maximum) |
VARCHAR and the BLOB and TEXT types are variable-length
types, for which the storage requirements depend on the actual length of
column values (represented by L in the preceding table), rather than
on the type's maximum possible size. For example, a VARCHAR(10)
column can hold a string with a maximum length of 10 characters. The actual
storage required is the length of the string (L), plus 1 byte to
record the length of the string. For the string 'abcd', L is 4
and the storage requirement is 5 bytes.
The BLOB and TEXT types require 1, 2, 3, or 4 bytes to record
the length of the column value, depending on the maximum possible length of
the type. See section 6.2.3.2 The BLOB and TEXT Types.
If a table includes any variable-length column types, the record format will also be variable-length. Note that when a table is created, MySQL may, under certain conditions, change a column from a variable-length type to a fixed-length type, or vice-versa. See section 6.5.3.1 Silent Column Specification Changes.
The size of an ENUM object is determined by the number of
different enumeration values. One byte is used for enumerations with up
to 255 possible values. Two bytes are used for enumerations with up to
65535 values. See section 6.2.3.3 The ENUM Type.
The size of a SET object is determined by the number of different
set members. If the set size is N, the object occupies (N+7)/8
bytes, rounded up to 1, 2, 3, 4, or 8 bytes. A SET can have a maximum
of 64 members. See section 6.2.3.4 The SET Type.
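For example, under this rule a hypothetical SET with 9 members occupies (9+7)/8 = 2 bytes, while one with 33 members needs 5 bytes and is rounded up to 8 bytes.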
The maximum size of a row in a MyISAM table is 65534 bytes. Each
BLOB and TEXT column accounts for only 5-9 bytes
towards this size.
Functions for Use in SELECT and WHERE Clauses
A select_expression or where_definition in a SQL statement
can consist of any expression using the functions described below.
An expression that contains NULL always produces a NULL value
unless otherwise indicated in the documentation for the operators and
functions involved in the expression.
Note: there must be no whitespace between a function name and the parentheses following it. This helps the MySQL parser distinguish between function calls and references to tables or columns that happen to have the same name as a function. Spaces around arguments are permitted, though.
You can force MySQL to accept spaces after the function name by
starting mysqld with --ansi or using the
CLIENT_IGNORE_SPACE flag to mysql_connect(), but in this case all
function names will become reserved words. See section 1.8.2 Running MySQL in ANSI Mode.
For the sake of brevity, examples display the output from the mysql
program in abbreviated form. So this:
mysql> SELECT MOD(29,9);
+-----------+
| mod(29,9) |
+-----------+
|         2 |
+-----------+
1 row in set (0.00 sec)
is displayed like this:
mysql> SELECT MOD(29,9);
-> 2
( ... )
Use parentheses to force the order of evaluation in an expression. For example:
mysql> SELECT 1+2*3;
-> 7
mysql> SELECT (1+2)*3;
-> 9
Comparison operations result in a value of 1 (TRUE), 0 (FALSE),
or NULL. These functions work for both numbers and strings. Strings
are automatically converted to numbers and numbers to strings as needed (as
in Perl).
MySQL performs comparisons using the following rules:
If one or both arguments are NULL, the result of the comparison
is NULL, except for the <=> operator.
If one of the arguments is a TIMESTAMP or DATETIME column and
the other argument is a constant, the constant is converted
to a timestamp before the comparison is performed. This is done to be more
ODBC-friendly.
By default, string comparisons are done in case-independent fashion using the current character set (ISO-8859-1 Latin1 by default, which also works excellently for English).
The following examples illustrate conversion of strings to numbers for comparison operations:
mysql> SELECT 1 > '6x';
-> 0
mysql> SELECT 7 > '6x';
-> 1
mysql> SELECT 0 > 'x6';
-> 0
mysql> SELECT 0 = 'x6';
-> 1
=
mysql> SELECT 1 = 0;
-> 0
mysql> SELECT '0' = 0;
-> 1
mysql> SELECT '0.0' = 0;
-> 1
mysql> SELECT '0.01' = 0;
-> 0
mysql> SELECT '.01' = 0.01;
-> 1
<>
!=
mysql> SELECT '.01' <> '0.01';
-> 1
mysql> SELECT .01 <> '0.01';
-> 0
mysql> SELECT 'zapp' <> 'zappp';
-> 1
<=
mysql> SELECT 0.1 <= 2;
-> 1
<
mysql> SELECT 2 < 2;
-> 0
>=
mysql> SELECT 2 >= 2;
-> 1
>
mysql> SELECT 2 > 2;
-> 0
<=>
mysql> SELECT 1 <=> 1, NULL <=> NULL, 1 <=> NULL;
-> 1 1 0
IS NULL
IS NOT NULL
Test whether or not a value is NULL:
mysql> SELECT 1 IS NULL, 0 IS NULL, NULL IS NULL;
-> 0 0 1
mysql> SELECT 1 IS NOT NULL, 0 IS NOT NULL, NULL IS NOT NULL;
-> 1 1 0
To be able to work well with other programs, MySQL supports the following
extra features when using IS NULL:
You can find the last inserted row with SELECT * FROM tbl_name WHERE auto_col IS NULL. This can be disabled by setting
SQL_AUTO_IS_NULL=0. See section 5.5.6 SET Syntax.
For NOT NULL DATE and DATETIME columns you can find
the special date 0000-00-00 by using
SELECT * FROM tbl_name WHERE date_column IS NULL. This is needed to get some ODBC applications to work (as ODBC doesn't support a
0000-00-00 date value).
expr BETWEEN min AND max
If expr is greater than or equal to min and expr is
less than or equal to max, BETWEEN returns 1,
otherwise it returns 0. This is equivalent to the expression
(min <= expr AND expr <= max) if all the arguments are of the
same type. Otherwise type conversion takes place, according to the rules
above, but applied to all the three arguments. Note that before
4.0.5 arguments were converted to the type of expr instead.
mysql> SELECT 1 BETWEEN 2 AND 3;
-> 0
mysql> SELECT 'b' BETWEEN 'a' AND 'c';
-> 1
mysql> SELECT 2 BETWEEN 2 AND '3';
-> 1
mysql> SELECT 2 BETWEEN 2 AND 'x-3';
-> 0
expr NOT BETWEEN min AND max
This is the same as NOT (expr BETWEEN min AND max).
expr IN (value,...)
Returns 1 if expr is any of the values in the IN list,
else returns 0. If all values are constants, then all values are
evaluated according to the type of expr and sorted. The search for the
item is then done using a binary search. This means IN is very quick
if the IN value list consists entirely of constants. If expr
is a case-sensitive string expression, the string comparison is performed in
case-sensitive fashion:
mysql> SELECT 2 IN (0,3,5,'wefwf');
-> 0
mysql> SELECT 'wefwf' IN (0,3,5,'wefwf');
-> 1
The number of values in the IN list is only limited by the
max_allowed_packet value.
From 4.1 (to comply with the SQL-99 standard), IN returns NULL
not only if the expression on the left hand side is NULL, but also if
no match is found in the list and one of the expressions in the list is
NULL.
expr NOT IN (value,...)
This is the same as NOT (expr IN (value,...)).
ISNULL(expr)
If expr is NULL, ISNULL() returns 1, otherwise
it returns 0:
mysql> SELECT ISNULL(1+1);
-> 0
mysql> SELECT ISNULL(1/0);
-> 1
Note that a comparison of NULL values using = will always be
false!
COALESCE(list)
Returns the first non-NULL element in the list:
mysql> SELECT COALESCE(NULL,1);
-> 1
mysql> SELECT COALESCE(NULL,NULL,NULL);
-> NULL
INTERVAL(N,N1,N2,N3,...)
Returns 0 if N < N1, 1 if N < N2
and so on. All arguments are treated as integers. It is required that
N1 < N2 < N3 < ... < Nn for this function
to work correctly. This is because a binary search is used (very fast):
mysql> SELECT INTERVAL(23, 1, 15, 17, 30, 44, 200);
-> 3
mysql> SELECT INTERVAL(10, 1, 10, 100, 1000);
-> 2
mysql> SELECT INTERVAL(22, 23, 30, 44, 200);
-> 0
If you are comparing case-insensitive strings with any of the standard
operators (=, <>..., but not LIKE), trailing whitespace
(spaces, tabs, and newlines) will be ignored.
mysql> SELECT "a" = "A \n";
-> 1
In SQL, all logical operators evaluate to TRUE, FALSE or NULL (UNKNOWN).
In MySQL, this is implemented as 1 (TRUE), 0 (FALSE),
and NULL. Most of this is common between different SQL databases,
however some may return any non-zero value for TRUE.
NOT
!
Logical NOT. Evaluates to 1 if the operand is 0,
to 0 if the operand is non-zero,
and NOT NULL returns NULL.
mysql> SELECT NOT 10;
-> 0
mysql> SELECT NOT 0;
-> 1
mysql> SELECT NOT NULL;
-> NULL
mysql> SELECT ! (1+1);
-> 0
mysql> SELECT ! 1+1;
-> 1
The last example produces 1 because the expression evaluates
the same way as (!1)+1.
AND
&&
Logical AND. Evaluates to 1 if all operands are non-zero and not NULL,
to 0 if one or more operands are 0,
otherwise NULL is returned.
mysql> SELECT 1 && 1;
-> 1
mysql> SELECT 1 && 0;
-> 0
mysql> SELECT 1 && NULL;
-> NULL
mysql> SELECT 0 && NULL;
-> 0
mysql> SELECT NULL && 0;
-> 0
Please note that MySQL versions prior to 4.0.5 stop evaluation when
a NULL is encountered, rather than continuing the process to
check for possible 0s. This means that in these versions,
SELECT (NULL AND 0) returns NULL instead of 0.
In 4.0.5 the code has been re-engineered so that the result will
always be as prescribed by the SQL standards while still using the
optimisation wherever possible.
OR
||
Logical OR. Evaluates to 1 if any operand is non-zero,
to NULL if any operand is NULL,
otherwise 0 is returned.
mysql> SELECT 1 || 1;
-> 1
mysql> SELECT 1 || 0;
-> 1
mysql> SELECT 0 || 0;
-> 0
mysql> SELECT 0 || NULL;
-> NULL
mysql> SELECT 1 || NULL;
-> 1
XOR
Logical XOR. Returns NULL if either operand is NULL.
For non-NULL operands, evaluates to 1 if an odd number
of operands is non-zero,
otherwise 0 is returned.
mysql> SELECT 1 XOR 1;
-> 0
mysql> SELECT 1 XOR 0;
-> 1
mysql> SELECT 1 XOR NULL;
-> NULL
mysql> SELECT 1 XOR 1 XOR 1;
-> 1
a XOR b is mathematically equal to
(a AND (NOT b)) OR ((NOT a) and b).
XOR was added in version 4.0.2.
IFNULL(expr1,expr2)
If expr1 is not NULL, IFNULL() returns expr1,
else it returns expr2. IFNULL() returns a numeric or string
value, depending on the context in which it is used:
mysql> SELECT IFNULL(1,0);
-> 1
mysql> SELECT IFNULL(NULL,10);
-> 10
mysql> SELECT IFNULL(1/0,10);
-> 10
mysql> SELECT IFNULL(1/0,'yes');
-> 'yes'
In 4.0.6 and above the default result type of
IFNULL(expr1,expr2) is the more 'general' of the two expressions,
in the order STRING, REAL or INTEGER. The difference from
earlier MySQL versions is mostly noticeable when you create a table
based on expressions or when MySQL has to internally store a value from
IFNULL() in a temporary table.
CREATE TABLE foo SELECT IFNULL(1,"test") as test;
In MySQL 4.0.6 the type for column 'test' is CHAR(4), while in
earlier versions you would get BIGINT.
NULLIF(expr1,expr2)
If expr1 = expr2 is true, return NULL, else return expr1.
This is the same as CASE WHEN x = y THEN NULL ELSE x END:
mysql> SELECT NULLIF(1,1);
-> NULL
mysql> SELECT NULLIF(1,2);
-> 1
Note that expr1 is evaluated twice in MySQL if the arguments
are not equal.
IF(expr1,expr2,expr3)
If expr1 is TRUE (expr1 <> 0 and expr1 <> NULL) then
IF() returns expr2, else it returns expr3.
IF() returns a numeric or string value, depending on the context
in which it is used:
mysql> SELECT IF(1>2,2,3);
-> 3
mysql> SELECT IF(1<2,'yes','no');
-> 'yes'
mysql> SELECT IF(STRCMP('test','test1'),'no','yes');
-> 'no'
If expr2 or expr3 is explicitly NULL then the
result type of the IF() function is the type of the non-NULL
column. (This behaviour is new in MySQL 4.0.3.)
expr1 is evaluated as an integer value, which means that if you are
testing floating-point or string values, you should do so using a comparison
operation:
mysql> SELECT IF(0.1,1,0);
-> 0
mysql> SELECT IF(0.1<>0,1,0);
-> 1
In the first case above, IF(0.1) returns 0 because 0.1
is converted to an integer value, resulting in a test of IF(0). This
may not be what you expect. In the second case, the comparison tests the
original floating-point value to see whether it is non-zero. The result
of the comparison is used as an integer.
The default return type of IF() (which may matter when it is
stored into a temporary table) is calculated in MySQL Version
3.23 as follows:
| Expression | Return value |
| expr2 or expr3 returns string | string |
| expr2 or expr3 returns a floating-point value | floating-point |
| expr2 or expr3 returns an integer | integer |
CASE value WHEN [compare-value] THEN result [WHEN [compare-value] THEN result ...] [ELSE result] END
CASE WHEN [condition] THEN result [WHEN [condition] THEN result ...] [ELSE result] END
The first version returns the result where
value=compare-value. The second version returns the result for
the first condition that is true. If there was no matching result
value, then the result after ELSE is returned. If there is no
ELSE part then NULL is returned:
mysql> SELECT CASE 1 WHEN 1 THEN "one"
WHEN 2 THEN "two" ELSE "more" END;
-> "one"
mysql> SELECT CASE WHEN 1>0 THEN "true" ELSE "false" END;
-> "true"
mysql> SELECT CASE BINARY "B" WHEN "a" THEN 1 WHEN "b" THEN 2 END;
-> NULL
The type of the return value (INTEGER, DOUBLE or
STRING) is the same as the type of the first returned value (the
expression after the first THEN).
String-valued functions return NULL if the length of the result would
be greater than the max_allowed_packet server parameter. See section 5.5.2 Tuning Server Parameters.
For functions that operate on string positions, the first position is numbered 1.
ASCII(str)
str. Returns 0 if str is the empty string. Returns
NULL if str is NULL:
mysql> SELECT ASCII('2');
-> 50
mysql> SELECT ASCII(2);
-> 50
mysql> SELECT ASCII('dx');
-> 100
See also the ORD() function.
ORD(str)
str is a multi-byte character,
returns the code for that character, calculated from the ASCII code values
of its constituent characters using this formula:
((first byte ASCII code)*256+(second byte ASCII code))[*256+third byte ASCII code...].
If the leftmost character is not a multi-byte character, returns the same
value that the ASCII() function does:
mysql> SELECT ORD('2');
-> 50
CONV(N,from_base,to_base)
N, converted from base from_base
to base to_base. Returns NULL if any argument is NULL.
The argument N is interpreted as an integer, but may be specified as
an integer or a string. The minimum base is 2 and the maximum base is
36. If to_base is a negative number, N is regarded as a
signed number. Otherwise, N is treated as unsigned. CONV works
with 64-bit precision:
mysql> SELECT CONV("a",16,2);
-> '1010'
mysql> SELECT CONV("6E",18,8);
-> '172'
mysql> SELECT CONV(-17,10,-18);
-> '-H'
mysql> SELECT CONV(10+"10"+'10'+0xa,10,10);
-> '40'
BIN(N)
N, where
N is a longlong (BIGINT) number. This is equivalent to
CONV(N,10,2). Returns NULL if N is NULL:
mysql> SELECT BIN(12);
-> '1100'
OCT(N)
N, where
N is a longlong number. This is equivalent to CONV(N,10,8).
Returns NULL if N is NULL:
mysql> SELECT OCT(12);
-> '14'
HEX(N_or_S)
If N_or_S is a number, returns a string representation of the hexadecimal
value of N, where N is a longlong (BIGINT) number.
This is equivalent to CONV(N,10,16).
If N_or_S is a string, returns a hexadecimal string representation of N_or_S where each
character in N_or_S is converted to 2 hexadecimal digits. This is the
inverse of 0xff-style hexadecimal strings:
mysql> SELECT HEX(255);
-> 'FF'
mysql> SELECT HEX("abc");
-> 616263
mysql> SELECT 0x616263;
-> "abc"
CHAR(N,...)
CHAR() interprets the arguments as integers and returns a string
consisting of the characters given by the ASCII code values of those
integers. NULL values are skipped:
mysql> SELECT CHAR(77,121,83,81,'76');
-> 'MySQL'
mysql> SELECT CHAR(77,77.3,'77.3');
-> 'MMM'
CONCAT(str1,str2,...)
NULL if any argument is NULL. May have more than 2 arguments.
A numeric argument is converted to the equivalent string form:
mysql> SELECT CONCAT('My', 'S', 'QL');
-> 'MySQL'
mysql> SELECT CONCAT('My', NULL, 'QL');
-> NULL
mysql> SELECT CONCAT(14.3);
-> '14.3'
CONCAT_WS(separator, str1, str2,...)
CONCAT_WS() stands for CONCAT With Separator and is a special form of
CONCAT(). The first argument is the separator for the rest of the
arguments. The separator can be a string, as can the rest of the
arguments. If the separator is NULL, the result will be NULL.
The function will skip any NULL values and empty strings after the
separator argument. The separator will be added between the strings to be
concatenated:
mysql> SELECT CONCAT_WS(",","First name","Second name","Last Name");
-> 'First name,Second name,Last Name'
mysql> SELECT CONCAT_WS(",","First name",NULL,"Last Name");
-> 'First name,Last Name'
LENGTH(str)
OCTET_LENGTH(str)
CHAR_LENGTH(str)
CHARACTER_LENGTH(str)
str:
mysql> SELECT LENGTH('text');
-> 4
mysql> SELECT OCTET_LENGTH('text');
-> 4
Note that for CHAR_LENGTH() and CHARACTER_LENGTH(), multi-byte
characters are only counted once.
BIT_LENGTH(str)
str in bits:
mysql> SELECT BIT_LENGTH('text');
-> 32
LOCATE(substr,str)
POSITION(substr IN str)
substr
in string str. Returns 0 if substr is not in str:
mysql> SELECT LOCATE('bar', 'foobarbar');
-> 4
mysql> SELECT LOCATE('xbar', 'foobar');
-> 0
This function is multi-byte safe. In MySQL 3.23 this function is case
sensitive, while in 4.0 it's only case-sensitive if either argument is
a binary string.
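As an illustration of the 4.0 behaviour (assuming a 4.0 server with the default character set):
mysql> SELECT LOCATE('BAR', 'foobarbar');
-> 4
mysql> SELECT LOCATE(BINARY 'BAR', 'foobarbar');
-> 0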
LOCATE(substr,str,pos)
substr in
string str, starting at position pos.
Returns 0 if substr is not in str:
mysql> SELECT LOCATE('bar', 'foobarbar',5);
-> 7
This function is multi-byte safe. In MySQL 3.23 this function is case
sensitive, while in 4.0 it's only case-sensitive if either argument is
a binary string.
INSTR(str,substr)
substr in
string str. This is the same as the two-argument form of
LOCATE(), except that the arguments are swapped:
mysql> SELECT INSTR('foobarbar', 'bar');
-> 4
mysql> SELECT INSTR('xbar', 'foobar');
-> 0
This function is multi-byte safe. In MySQL 3.23 this function is case
sensitive, while in 4.0 it's only case-sensitive if either argument is
a binary string.
LPAD(str,len,padstr)
Returns the string str, left-padded with the string padstr
until str is len characters long. If str is longer
than len, it will be shortened to len characters.
mysql> SELECT LPAD('hi',4,'??');
-> '??hi'
RPAD(str,len,padstr)
Returns the string str, right-padded with the string
padstr until str is len characters long. If
str is longer than len, it will be shortened to
len characters.
mysql> SELECT RPAD('hi',5,'?');
-> 'hi???'
LEFT(str,len)
len characters from the string str:
mysql> SELECT LEFT('foobarbar', 5);
-> 'fooba'
This function is multi-byte safe.
RIGHT(str,len)
len characters from the string str:
mysql> SELECT RIGHT('foobarbar', 4);
-> 'rbar'
This function is multi-byte safe.
SUBSTRING(str,pos,len)
SUBSTRING(str FROM pos FOR len)
MID(str,pos,len)
len characters long from string str,
starting at position pos.
The variant form that uses FROM is SQL-92 syntax:
mysql> SELECT SUBSTRING('Quadratically',5,6);
-> 'ratica'
This function is multi-byte safe.
SUBSTRING(str,pos)
SUBSTRING(str FROM pos)
str starting at position pos:
mysql> SELECT SUBSTRING('Quadratically',5);
-> 'ratically'
mysql> SELECT SUBSTRING('foobarbar' FROM 4);
-> 'barbar'
This function is multi-byte safe.
SUBSTRING_INDEX(str,delim,count)
str before count
occurrences of the delimiter delim.
If count is positive, everything to the left of the final delimiter
(counting from the left) is returned.
If count is negative, everything to the right of the final delimiter
(counting from the right) is returned:
mysql> SELECT SUBSTRING_INDEX('www.mysql.com', '.', 2);
-> 'www.mysql'
mysql> SELECT SUBSTRING_INDEX('www.mysql.com', '.', -2);
-> 'mysql.com'
This function is multi-byte safe.
LTRIM(str)
str with leading space characters removed:
mysql> SELECT LTRIM(' barbar');
-> 'barbar'
RTRIM(str)
str with trailing space characters removed:
mysql> SELECT RTRIM('barbar ');
-> 'barbar'
This function is multi-byte safe.
TRIM([[BOTH | LEADING | TRAILING] [remstr] FROM] str)
str with all remstr prefixes and/or suffixes
removed. If none of the specifiers BOTH, LEADING or
TRAILING are given, BOTH is assumed. If remstr is not
specified, spaces are removed:
mysql> SELECT TRIM(' bar ');
-> 'bar'
mysql> SELECT TRIM(LEADING 'x' FROM 'xxxbarxxx');
-> 'barxxx'
mysql> SELECT TRIM(BOTH 'x' FROM 'xxxbarxxx');
-> 'bar'
mysql> SELECT TRIM(TRAILING 'xyz' FROM 'barxxyz');
-> 'barx'
This function is multi-byte safe.
SOUNDEX(str)
str. Two strings that sound almost the
same should have identical soundex strings. A standard soundex string
is 4 characters long, but the SOUNDEX() function returns an
arbitrarily long string. You can use SUBSTRING() on the result to get
a standard soundex string. All non-alphanumeric characters are ignored
in the given string. All international alpha characters outside the A-Z range
are treated as vowels:
mysql> SELECT SOUNDEX('Hello');
-> 'H400'
mysql> SELECT SOUNDEX('Quadratically');
-> 'Q36324'
SPACE(N)
N space characters:
mysql> SELECT SPACE(6);
-> ' '
REPLACE(str,from_str,to_str)
str with all occurrences of the string
from_str replaced by the string to_str:
mysql> SELECT REPLACE('www.mysql.com', 'w', 'Ww');
-> 'WwWwWw.mysql.com'
This function is multi-byte safe.
REPEAT(str,count)
str repeated count
times. If count <= 0, returns an empty string. Returns NULL if
str or count are NULL:
mysql> SELECT REPEAT('MySQL', 3);
-> 'MySQLMySQLMySQL'
REVERSE(str)
str with the order of the characters reversed:
mysql> SELECT REVERSE('abc');
-> 'cba'
This function is multi-byte safe.
INSERT(str,pos,len,newstr)
str, with the substring beginning at position
pos and len characters long replaced by the string
newstr:
mysql> SELECT INSERT('Quadratic', 3, 4, 'What');
-> 'QuWhattic'
This function is multi-byte safe.
ELT(N,str1,str2,str3,...)
str1 if N = 1, str2 if N =
2, and so on. Returns NULL if N is less than 1
or greater than the number of arguments. ELT() is the complement of
FIELD():
mysql> SELECT ELT(1, 'ej', 'Heja', 'hej', 'foo');
-> 'ej'
mysql> SELECT ELT(4, 'ej', 'Heja', 'hej', 'foo');
-> 'foo'
FIELD(str,str1,str2,str3,...)
str in the str1, str2,
str3, ... list.
Returns 0 if str is not found.
FIELD() is the complement of ELT():
mysql> SELECT FIELD('ej', 'Hej', 'ej', 'Heja', 'hej', 'foo');
-> 2
mysql> SELECT FIELD('fo', 'Hej', 'ej', 'Heja', 'hej', 'foo');
-> 0
FIND_IN_SET(str,strlist)
1 to N if the string str is in the list
strlist consisting of N substrings. A string list is a string
composed of substrings separated by `,' characters. If the first
argument is a constant string and the second is a column of type SET,
the FIND_IN_SET() function is optimised to use bit arithmetic!
Returns 0 if str is not in strlist or if strlist
is the empty string. Returns NULL if either argument is NULL.
This function will not work properly if the first argument contains a
`,':
mysql> SELECT FIND_IN_SET('b','a,b,c,d');
-> 2
MAKE_SET(bits,str1,str2,...)
bits set. str1 corresponds to bit 0, str2 to bit 1,
etc. NULL strings in str1, str2, ...
are not appended to the result:
mysql> SELECT MAKE_SET(1,'a','b','c');
-> 'a'
mysql> SELECT MAKE_SET(1 | 4,'hello','nice','world');
-> 'hello,world'
mysql> SELECT MAKE_SET(0,'a','b','c');
-> ''
EXPORT_SET(bits,on,off,[separator,[number_of_bits]])
Returns a string in which, for every bit set in bits, you get an on
string and, for every reset bit, you get an off string. Each string is
separated by separator (default `,'), and only number_of_bits
(default 64) bits of bits are used:
mysql> SELECT EXPORT_SET(5,'Y','N',',',4);
-> Y,N,Y,N
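Another illustrative example; 6 is binary 110, so bits 1 and 2 are set:
mysql> SELECT EXPORT_SET(6,'1','0',',',10);
-> '0,1,1,0,0,0,0,0,0,0'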
LCASE(str)
LOWER(str)
str with all characters changed to lowercase
according to the current character set mapping (the default is ISO-8859-1
Latin1):
mysql> SELECT LCASE('QUADRATICALLY');
-> 'quadratically'
This function is multi-byte safe.
UCASE(str)
UPPER(str)
str with all characters changed to uppercase
according to the current character set mapping (the default is ISO-8859-1
Latin1):
mysql> SELECT UCASE('Hej');
-> 'HEJ'
This function is multi-byte safe.
LOAD_FILE(file_name)
Reads the file and returns the file contents as a string. The file must
be located on the server, you must specify the full pathname to the file,
and you must have the FILE privilege. The file must
be readable by all and be smaller than max_allowed_packet.
If the file doesn't exist or can't be read due to one of the above reasons,
the function returns NULL:
mysql> UPDATE tbl_name
SET blob_column=LOAD_FILE("/tmp/picture")
WHERE id=1;
If you are not using MySQL Version 3.23, you have to do the reading
of the file inside your application and create an INSERT statement
to update the database with the file information. One way to do this, if
you are using the MySQL++ library, can be found at
http://www.mysql.com/documentation/mysql++/mysql++-examples.html.
QUOTE(str)
Quotes a string to produce a result that can be used as a properly
escaped data value in an SQL statement. The string is returned surrounded
by single quotes, with each instance of single quote, backslash, ASCII NUL,
and Control-Z preceded by a backslash. If the argument is
NULL, the return value is the word ``NULL'' without surrounding
single quotes.
The QUOTE function was added in MySQL version 4.0.3.
mysql> SELECT QUOTE("Don't");
-> 'Don\'t'
mysql> SELECT QUOTE(NULL);
-> NULL
MySQL automatically converts numbers to strings as necessary, and vice-versa:
mysql> SELECT 1+"1";
-> 2
mysql> SELECT CONCAT(2,' test');
-> '2 test'
If you want to convert a number to a string explicitly, pass it as the
argument to CONCAT().
If a string function is given a binary string as an argument, the resulting string is also a binary string. A number converted to a string is treated as a binary string. This only affects comparisons.
Normally, if any expression in a string comparison is case-sensitive, the comparison is performed in case-sensitive fashion.
expr LIKE pat [ESCAPE 'escape-char']
1 (TRUE) or 0
(FALSE). With LIKE you can use the following two wildcard characters
in the pattern:
| Char | Description |
| % | Matches any number of characters, even zero characters |
| _ | Matches exactly one character |
mysql> SELECT 'David!' LIKE 'David_';
-> 1
mysql> SELECT 'David!' LIKE '%D%v%';
-> 1
To test for literal instances of a wildcard character, precede the character
with the escape character. If you don't specify the ESCAPE character,
`\' is assumed:
| String | Description |
| \% | Matches one % character |
| \_ | Matches one _ character |
mysql> SELECT 'David!' LIKE 'David\_';
-> 0
mysql> SELECT 'David_' LIKE 'David\_';
-> 1
To specify a different escape character, use the ESCAPE clause:
mysql> SELECT 'David_' LIKE 'David|_' ESCAPE '|';
-> 1
The following two statements illustrate that string comparisons are
case-insensitive unless one of the operands is a binary string:
mysql> SELECT 'abc' LIKE 'ABC';
-> 1
mysql> SELECT 'abc' LIKE BINARY 'ABC';
-> 0
LIKE is allowed on numeric expressions! (This is a MySQL
extension to the SQL-99 LIKE.)
mysql> SELECT 10 LIKE '1%';
-> 1
Note: Because MySQL uses the C escape syntax in strings (for example,
`\n'), you must double any `\' that you use in your LIKE
strings. For example, to search for `\n', specify it as `\\n'. To
search for `\', specify it as `\\\\' (the backslashes are stripped
once by the parser and another time when the pattern match is done, leaving
a single backslash to be matched).
Note: Currently LIKE is not multi-byte character safe.
Comparison is done character by character.
expr NOT LIKE pat [ESCAPE 'escape-char']
Same as NOT (expr LIKE pat [ESCAPE 'escape-char']).
expr SOUNDS LIKE expr
Same as SOUNDEX(expr)=SOUNDEX(expr) (available only in version 4.1 or later).
expr REGEXP pat
expr RLIKE pat
Performs a pattern match of string expression expr against a pattern
pat. The pattern can be an extended regular expression.
See section G MySQL Regular Expressions. Returns 1 if expr matches pat, otherwise
returns 0. RLIKE is a synonym for REGEXP, provided for
mSQL compatibility. Note: Because MySQL uses the C escape
syntax in strings (for example, `\n'), you must double any `\' that
you use in your REGEXP strings. As of MySQL Version 3.23.4,
REGEXP is case-insensitive for normal (not binary) strings:
mysql> SELECT 'Monty!' REGEXP 'm%y%%';
-> 0
mysql> SELECT 'Monty!' REGEXP '.*';
-> 1
mysql> SELECT 'new*\n*line' REGEXP 'new\\*.\\*line';
-> 1
mysql> SELECT "a" REGEXP "A", "a" REGEXP BINARY "A";
-> 1 0
mysql> SELECT "a" REGEXP "^[a-d]";
-> 1
REGEXP and RLIKE use the current character set (ISO-8859-1
Latin1 by default) when deciding the type of a character.
expr NOT REGEXP pat
expr NOT RLIKE pat
Same as NOT (expr REGEXP pat).
STRCMP(expr1,expr2)
STRCMP()
returns 0 if the strings are the same, -1 if the first
argument is smaller than the second according to the current sort order,
and 1 otherwise:
mysql> SELECT STRCMP('text', 'text2');
-> -1
mysql> SELECT STRCMP('text2', 'text');
-> 1
mysql> SELECT STRCMP('text', 'text');
-> 0
MATCH (col1,col2,...) AGAINST (expr)
MATCH (col1,col2,...) AGAINST (expr IN BOOLEAN MODE)
MATCH ... AGAINST() is used for full-text search and returns
relevance, a similarity measure between the text in the columns
(col1,col2,...) and the query expr. Relevance is a
positive floating-point number. Zero relevance means no similarity.
MATCH ... AGAINST() is available in MySQL version
3.23.23 or later. The IN BOOLEAN MODE extension was added in version
4.0.1. For details and usage examples, see section 6.8 MySQL Full-text Search.
BINARY
The BINARY operator casts the string following it to a binary string.
This is an easy way to force a column comparison to be case-sensitive even
if the column isn't defined as BINARY or BLOB:
mysql> SELECT "a" = "A";
-> 1
mysql> SELECT BINARY "a" = "A";
-> 0
BINARY string is a shorthand for CAST(string AS BINARY).
See section 6.3.5 Cast Functions.
BINARY was introduced in MySQL Version 3.23.0.
Note that in some context MySQL will not be able to use the
index efficiently when you cast an indexed column to BINARY.
If you want to compare a blob case-insensitively you can always convert the blob to upper case before doing the comparison:
SELECT 'A' LIKE UPPER(blob_col) FROM table_name;
We plan to soon introduce casting between different character sets to make string comparison even more flexible.
The usual arithmetic operators are available. Note that in the case of
`-', `+', and `*', the result is calculated with
BIGINT (64-bit) precision if both arguments are integers!
If one of the arguments is an unsigned integer and the other argument
is also an integer, the result will be an unsigned integer.
See section 6.3.5 Cast Functions.
+
mysql> SELECT 3+5;
-> 8
-
mysql> SELECT 3-5;
-> -2
*
mysql> SELECT 3*5;
-> 15
mysql> SELECT 18014398509481984*18014398509481984.0;
-> 324518553658426726783156020576256.0
mysql> SELECT 18014398509481984*18014398509481984;
-> 0
The result of the last expression is incorrect because the result of the
integer multiplication exceeds the 64-bit range of BIGINT
calculations.
/
mysql> SELECT 3/5;
-> 0.60
Division by zero produces a NULL result:
mysql> SELECT 102/(1-1);
-> NULL
A division will be calculated with BIGINT arithmetic only if performed
in a context where its result is converted to an integer!
All mathematical functions return NULL in case of an error.
-
mysql> SELECT - 2;
-> -2
Note that if this operator is used with a BIGINT, the return value is a
BIGINT! This means that you should avoid using - on integers that
may have the value of -2^63!
ABS(X)
X:
mysql> SELECT ABS(2);
-> 2
mysql> SELECT ABS(-32);
-> 32
This function is safe to use with BIGINT values.
SIGN(X)
-1, 0, or 1, depending
on whether X is negative, zero, or positive:
mysql> SELECT SIGN(-32);
-> -1
mysql> SELECT SIGN(0);
-> 0
mysql> SELECT SIGN(234);
-> 1
MOD(N,M)
%
Modulo (like the % operator in C).
Returns the remainder of N divided by M:
mysql> SELECT MOD(234, 10);
-> 4
mysql> SELECT 253 % 7;
-> 1
mysql> SELECT MOD(29,9);
-> 2
mysql> SELECT 29 MOD 9;
-> 2
This function is safe to use with BIGINT values.
The last example (using MOD as an infix operator) only works in MySQL 4.1 or later.
FLOOR(X)
X:
mysql> SELECT FLOOR(1.23);
-> 1
mysql> SELECT FLOOR(-1.23);
-> -2
Note that the return value is converted to a BIGINT!
CEILING(X)
X:
mysql> SELECT CEILING(1.23);
-> 2
mysql> SELECT CEILING(-1.23);
-> -1
Note that the return value is converted to a BIGINT!
ROUND(X)
ROUND(X,D)
Returns the argument X, rounded to the nearest integer.
With two arguments, returns X rounded to D decimals.
mysql> SELECT ROUND(-1.23);
-> -1
mysql> SELECT ROUND(-1.58);
-> -2
mysql> SELECT ROUND(1.58);
-> 2
mysql> SELECT ROUND(1.298, 1);
-> 1.3
mysql> SELECT ROUND(1.298, 0);
-> 1
mysql> SELECT ROUND(23.298, -1);
-> 20
Note that the behaviour of ROUND() when the argument
is half way between two integers depends on the C library
implementation. Some round to the nearest even number,
always up, always down, or always toward zero. If you need
one kind of rounding, you should use a well-defined function
like TRUNCATE() or FLOOR() instead.
DIV
Integer division. Similar to FLOOR() but safe with BIGINT values.
mysql> SELECT 5 DIV 2;
-> 2
DIV is new in MySQL 4.1.0.
EXP(X)
e (the base of natural logarithms) raised to
the power of X:
mysql> SELECT EXP(2);
-> 7.389056
mysql> SELECT EXP(-2);
-> 0.135335
LN(X)
X:
mysql> SELECT LN(2);
-> 0.693147
mysql> SELECT LN(-2);
-> NULL
This function was added in MySQL version 4.0.3.
It is synonymous with LOG(X) in MySQL.
LOG(X)
LOG(B,X)
X:
mysql> SELECT LOG(2);
-> 0.693147
mysql> SELECT LOG(-2);
-> NULL
If called with two parameters, this function returns the logarithm of
X for an arbitrary base B:
mysql> SELECT LOG(2,65536);
-> 16.000000
mysql> SELECT LOG(1,100);
-> NULL
The arbitrary base option was added in MySQL version 4.0.3.
LOG(B,X) is equivalent to LOG(X)/LOG(B).
LOG2(X)
X:
mysql> SELECT LOG2(65536);
-> 16.000000
mysql> SELECT LOG2(-100);
-> NULL
LOG2() is useful for finding out how many bits a number would
require for storage.
This function was added in MySQL version 4.0.3.
In earlier versions, you can use LOG(X)/LOG(2) instead.
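For example, the workaround gives the same value (the number of displayed decimals may differ):
mysql> SELECT LOG(65536)/LOG(2);
-> 16.000000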
LOG10(X)
X:
mysql> SELECT LOG10(2);
-> 0.301030
mysql> SELECT LOG10(100);
-> 2.000000
mysql> SELECT LOG10(-100);
-> NULL
POW(X,Y)
POWER(X,Y)
X raised to the power of Y:
mysql> SELECT POW(2,2);
-> 4.000000
mysql> SELECT POW(2,-2);
-> 0.250000
SQRT(X)
X:
mysql> SELECT SQRT(4);
-> 2.000000
mysql> SELECT SQRT(20);
-> 4.472136
PI()
mysql> SELECT PI();
-> 3.141593
mysql> SELECT PI()+0.000000000000000000;
-> 3.141592653589793116
COS(X)
X, where X is given in radians:
mysql> SELECT COS(PI());
-> -1.000000
SIN(X)
X, where X is given in radians:
mysql> SELECT SIN(PI());
-> 0.000000
TAN(X)
X, where X is given in radians:
mysql> SELECT TAN(PI()+1);
-> 1.557408
ACOS(X)
X, that is, the value whose cosine is
X. Returns NULL if X is not in the range -1 to
1:
mysql> SELECT ACOS(1);
-> 0.000000
mysql> SELECT ACOS(1.0001);
-> NULL
mysql> SELECT ACOS(0);
-> 1.570796
ASIN(X)
X, that is, the value whose sine is
X. Returns NULL if X is not in the range -1 to
1:
mysql> SELECT ASIN(0.2);
-> 0.201358
mysql> SELECT ASIN('foo');
-> 0.000000
ATAN(X)
X, that is, the value whose tangent is
X:
mysql> SELECT ATAN(2);
-> 1.107149
mysql> SELECT ATAN(-2);
-> -1.107149
ATAN(Y,X)
ATAN2(Y,X)
X and Y. It is
similar to calculating the arc tangent of Y / X, except that the
signs of both arguments are used to determine the quadrant of the
result:
mysql> SELECT ATAN(-2,2);
-> -0.785398
mysql> SELECT ATAN2(PI(),0);
-> 1.570796
COT(X)
X:
mysql> SELECT COT(12);
-> -1.57267341
mysql> SELECT COT(0);
-> NULL
RAND()
RAND(N)
0 to 1.0.
If an integer argument N is specified, it is used as the seed value
(producing a repeatable sequence):
mysql> SELECT RAND();
-> 0.9233482386203
mysql> SELECT RAND(20);
-> 0.15888261251047
mysql> SELECT RAND(20);
-> 0.15888261251047
mysql> SELECT RAND();
-> 0.63553050033332
mysql> SELECT RAND();
-> 0.70100469486881
You can't use a column with RAND() values in an ORDER BY
clause, because ORDER BY would evaluate the column multiple times.
From version 3.23 you can, however, do:
SELECT * FROM table_name ORDER BY RAND();
This is useful to get a random sample of a set, for example:
SELECT * FROM table1,table2 WHERE a=b AND c<d ORDER BY RAND() LIMIT 1000;
Note that a RAND() in a WHERE clause will be re-evaluated
every time the WHERE is executed.
RAND() is not meant to be a perfect random generator, but instead a
fast way to generate ad hoc random numbers that will be portable between
platforms for the same MySQL version.
LEAST(X,Y,...)
With two or more arguments, returns the smallest (minimum-valued) argument.
The arguments are compared using the following rules:
If the return value is used in an INTEGER context, or all arguments
are integer-valued, they are compared as integers.
If the return value is used in a REAL context, or all arguments are
real-valued, they are compared as reals.
Otherwise, the arguments are compared as case-insensitive strings.
mysql> SELECT LEAST(2,0);
-> 0
mysql> SELECT LEAST(34.0,3.0,5.0,767.0);
-> 3.0
mysql> SELECT LEAST("B","A","C");
-> "A"
In MySQL versions prior to Version 3.22.5, you can use MIN()
instead of LEAST.
GREATEST(X,Y,...)
Returns the largest (maximum-valued) argument. The arguments are compared
using the same rules as for LEAST:
mysql> SELECT GREATEST(2,0);
-> 2
mysql> SELECT GREATEST(34.0,3.0,5.0,767.0);
-> 767.0
mysql> SELECT GREATEST("B","A","C");
-> "C"
In MySQL versions prior to Version 3.22.5, you can use MAX()
instead of GREATEST.
DEGREES(X)
X, converted from radians to degrees:
mysql> SELECT DEGREES(PI());
-> 180.000000
RADIANS(X)
X, converted from degrees to radians:
mysql> SELECT RADIANS(90);
-> 1.570796
TRUNCATE(X,D)
X, truncated to D decimals. If D
is 0, the result will have no decimal point or fractional part:
mysql> SELECT TRUNCATE(1.223,1);
-> 1.2
mysql> SELECT TRUNCATE(1.999,1);
-> 1.9
mysql> SELECT TRUNCATE(1.999,0);
-> 1
mysql> SELECT TRUNCATE(-1.999,1);
-> -1.9
Starting from MySQL 3.23.51 all numbers are rounded towards zero.
If D is negative, then the whole part of the number is zeroed out:
mysql> SELECT TRUNCATE(122,-2);
-> 100
Note that as decimal numbers are normally not stored as exact numbers in
computers, but as double values, you may be fooled by the following
result:
mysql> SELECT TRUNCATE(10.28*100,0);
-> 1027
The above happens because 10.28 is actually stored as something like
10.2799999999999999.
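If you need the intuitive result in such cases, one workaround is to round instead of truncating, for example:
mysql> SELECT ROUND(10.28*100,0);
-> 1028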
See section 6.2.2 Date and Time Types for a description of the range of values each type has and the valid formats in which date and time values may be specified.
Here is an example that uses date functions. The following query selects
all records with a date_col value from within the last 30 days:
mysql> SELECT something FROM tbl_name
WHERE TO_DAYS(NOW()) - TO_DAYS(date_col) <= 30;
DAYOFWEEK(date)
date (1 = Sunday, 2 = Monday, ... 7 =
Saturday). These index values correspond to the ODBC standard.
mysql> SELECT DAYOFWEEK('1998-02-03');
-> 3
WEEKDAY(date)
date (0 = Monday, 1 = Tuesday, ... 6 = Sunday):
mysql> SELECT WEEKDAY('1998-02-03 22:23:00');
-> 1
mysql> SELECT WEEKDAY('1997-11-05');
-> 2
DAYOFMONTH(date)
date, in the range 1 to
31:
mysql> SELECT DAYOFMONTH('1998-02-03');
-> 3
DAYOFYEAR(date)
date, in the range 1 to
366:
mysql> SELECT DAYOFYEAR('1998-02-03');
-> 34
MONTH(date)
date, in the range 1 to 12:
mysql> SELECT MONTH('1998-02-03');
-> 2
DAYNAME(date)
date:
mysql> SELECT DAYNAME("1998-02-05");
-> 'Thursday'
MONTHNAME(date)
date:
mysql> SELECT MONTHNAME("1998-02-05");
-> 'February'
QUARTER(date)
date, in the range 1
to 4:
mysql> SELECT QUARTER('98-04-01');
-> 2
WEEK(date)
WEEK(date,first)
date, in the range
0 to 53 (yes, there may be the beginnings of a week 53),
for locations where Sunday is the first day of the week. The
two-argument form of WEEK() allows you to specify whether the
week starts on Sunday or Monday and whether the return value should be in
the range 0-53 or 1-52.
Here is a table for how the second argument works:
| Value | Meaning |
| 0 | Week starts on Sunday and return value is in range 0-53 |
| 1 | Week starts on Monday and return value is in range 0-53 |
| 2 | Week starts on Sunday and return value is in range 1-53 |
| 3 | Week starts on Monday and return value is in range 1-53 (ISO 8601) |
mysql> SELECT WEEK('1998-02-20');
-> 7
mysql> SELECT WEEK('1998-02-20',0);
-> 7
mysql> SELECT WEEK('1998-02-20',1);
-> 8
mysql> SELECT WEEK('1998-12-31',1);
-> 53
For MySQL 3.23 and 4.0 the default value for the second argument is 0.
In MySQL 4.1 you can set the default value of the second argument with the
default_week_format variable. The syntax of default_week_format is:
SET [SESSION | GLOBAL] default_week_format = [0|1|2|3];
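A minimal illustration, assuming a 4.1 server (the variable does not exist in 3.23 or 4.0):
mysql> SET default_week_format = 3;
mysql> SELECT WEEK('1998-12-31');
-> 53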
Note: in Version 4.0, WEEK(#,0) was changed to match the
calendar in the USA. Before that, WEEK() was calculated incorrectly
for dates in the USA. (In effect, WEEK(#) and WEEK(#,0) were wrong for all
cases.)
Note that if a week is the last week of the previous year, MySQL will
return 0 if you don't use 2 or 3 as the optional argument:
mysql> SELECT YEAR('2000-01-01'), WEEK('2000-01-01',0);
-> 2000, 0
mysql> SELECT WEEK('2000-01-01',2);
-> 52
One could argue that MySQL should return 52 for the WEEK()
function, as the given date actually falls in the 52nd week of 1999. We
decided to return 0 instead, as we want the function to return 'the week
number in the given year'. This makes the usage of the WEEK()
function reliable when combined with other functions that extract a
date part from a date.
If you would prefer to know the correct year-week, you should use
2 or 3 as the optional argument, or use the YEARWEEK()
function:
mysql> SELECT YEARWEEK('2000-01-01');
-> 199952
mysql> SELECT MID(YEARWEEK('2000-01-01'),5,2);
-> 52
YEAR(date)
date, in the range 1000 to 9999:
mysql> SELECT YEAR('98-02-03');
-> 1998
YEARWEEK(date)
YEARWEEK(date,first)
Returns the year and week for a date. The second argument works exactly
like the second argument to WEEK(). Note that the year may be
different from the year in the date argument for the first and the last
week of the year:
mysql> SELECT YEARWEEK('1987-01-01');
-> 198653
Note that the week number is different from what the WEEK()
function would return (0) for optional arguments 0 or 1,
as WEEK() then returns the week in the context of the given year.
HOUR(time)
time, in the range 0 to 23:
mysql> SELECT HOUR('10:05:03');
-> 10
MINUTE(time)
time, in the range 0 to 59:
mysql> SELECT MINUTE('98-02-03 10:05:03');
-> 5
SECOND(time)
time, in the range 0 to 59:
mysql> SELECT SECOND('10:05:03');
-> 3
PERIOD_ADD(P,N)
N months to period P (in the format YYMM or
YYYYMM). Returns a value in the format YYYYMM.
Note that the period argument P is not a date value:
mysql> SELECT PERIOD_ADD(9801,2);
-> 199803
PERIOD_DIFF(P1,P2)
P1 and P2.
P1 and P2 should be in the format YYMM or YYYYMM.
Note that the period arguments P1 and P2 are not
date values:
mysql> SELECT PERIOD_DIFF(9802,199703);
-> 11
DATE_ADD(date,INTERVAL expr type)
DATE_SUB(date,INTERVAL expr type)
ADDDATE(date,INTERVAL expr type)
SUBDATE(date,INTERVAL expr type)
ADDDATE() and
SUBDATE() are synonyms for DATE_ADD() and
DATE_SUB().
In MySQL Version 3.23, you can use + and - instead of
DATE_ADD() and DATE_SUB() if the expression on the right side is
a date or datetime column. (See example below.)
date is a DATETIME or DATE value specifying the starting
date. expr is an expression specifying the interval value to be added
or subtracted from the starting date. expr is a string; it may start
with a `-' for negative intervals. type is a keyword indicating
how the expression should be interpreted.
The following table shows how the type and expr arguments
are related:
| type value | Expected expr format |
| SECOND | SECONDS |
| MINUTE | MINUTES |
| HOUR | HOURS |
| DAY | DAYS |
| MONTH | MONTHS |
| YEAR | YEARS |
| MINUTE_SECOND | "MINUTES:SECONDS" |
| HOUR_MINUTE | "HOURS:MINUTES" |
| DAY_HOUR | "DAYS HOURS" |
| YEAR_MONTH | "YEARS-MONTHS" |
| HOUR_SECOND | "HOURS:MINUTES:SECONDS" |
| DAY_MINUTE | "DAYS HOURS:MINUTES" |
| DAY_SECOND | "DAYS HOURS:MINUTES:SECONDS" |
MySQL allows any punctuation delimiter in the expr format.
Those shown in the table are the suggested delimiters. If the date
argument is a DATE value and your calculations involve only
YEAR, MONTH, and DAY parts (that is, no time parts), the
result is a DATE value. Otherwise, the result is a DATETIME
value:
mysql> SELECT "1997-12-31 23:59:59" + INTERVAL 1 SECOND;
-> 1998-01-01 00:00:00
mysql> SELECT INTERVAL 1 DAY + "1997-12-31";
-> 1998-01-01
mysql> SELECT "1998-01-01" - INTERVAL 1 SECOND;
-> 1997-12-31 23:59:59
mysql> SELECT DATE_ADD("1997-12-31 23:59:59",
-> INTERVAL 1 SECOND);
-> 1998-01-01 00:00:00
mysql> SELECT DATE_ADD("1997-12-31 23:59:59",
-> INTERVAL 1 DAY);
-> 1998-01-01 23:59:59
mysql> SELECT DATE_ADD("1997-12-31 23:59:59",
-> INTERVAL "1:1" MINUTE_SECOND);
-> 1998-01-01 00:01:00
mysql> SELECT DATE_SUB("1998-01-01 00:00:00",
-> INTERVAL "1 1:1:1" DAY_SECOND);
-> 1997-12-30 22:58:59
mysql> SELECT DATE_ADD("1998-01-01 00:00:00",
-> INTERVAL "-1 10" DAY_HOUR);
-> 1997-12-30 14:00:00
mysql> SELECT DATE_SUB("1998-01-02", INTERVAL 31 DAY);
-> 1997-12-02
If you specify an interval value that is too short (does not include all the
interval parts that would be expected from the type keyword),
MySQL assumes you have left out the leftmost parts of the interval
value. For example, if you specify a type of DAY_SECOND, the
value of expr is expected to have days, hours, minutes, and seconds
parts. If you specify a value like "1:10", MySQL assumes
that the days and hours parts are missing and the value represents minutes
and seconds. In other words, "1:10" DAY_SECOND is interpreted in such
a way that it is equivalent to "1:10" MINUTE_SECOND. This is
analogous to the way that MySQL interprets TIME values
as representing elapsed time rather than as time of day.
Note that if you add or subtract a date value against something that
contains a time part, the date value will be automatically converted to a
datetime value:
mysql> SELECT DATE_ADD("1999-01-01", INTERVAL 1 DAY);
-> 1999-01-02
mysql> SELECT DATE_ADD("1999-01-01", INTERVAL 1 HOUR);
-> 1999-01-01 01:00:00
If you use really incorrect dates, the result is NULL. If you add
MONTH, YEAR_MONTH, or YEAR and the resulting date
has a day that is larger than the maximum day for the new month, the day is
adjusted to the maximum days in the new month:
mysql> SELECT DATE_ADD('1998-01-30', INTERVAL 1 MONTH);
-> 1998-02-28
Note from the preceding example that the word INTERVAL and the
type keyword are not case-sensitive.
EXTRACT(type FROM date)
EXTRACT() function uses the same kinds of interval type
specifiers as DATE_ADD() or DATE_SUB(), but extracts parts
from the date rather than performing date arithmetic.
mysql> SELECT EXTRACT(YEAR FROM "1999-07-02");
-> 1999
mysql> SELECT EXTRACT(YEAR_MONTH FROM "1999-07-02 01:02:03");
-> 199907
mysql> SELECT EXTRACT(DAY_MINUTE FROM "1999-07-02 01:02:03");
-> 20102
TO_DAYS(date)
date, returns a daynumber (the number of days since year
0):
mysql> SELECT TO_DAYS(950501);
-> 728779
mysql> SELECT TO_DAYS('1997-10-07');
-> 729669
TO_DAYS() is not intended for use with values that precede the advent
of the Gregorian calendar (1582), because it doesn't take into account the
days that were lost when the calendar was changed.
FROM_DAYS(N)
N, returns a DATE value:
mysql> SELECT FROM_DAYS(729669);
-> '1997-10-07'
FROM_DAYS() is not intended for use with values that precede the
advent of the Gregorian calendar (1582), because it doesn't take into account
the days that were lost when the calendar was changed.
DATE_FORMAT(date,format)
date value according to the format string. The
following specifiers may be used in the format string:
| Specifier | Description |
| %M | Month name (January..December) |
| %W | Weekday name (Sunday..Saturday) |
| %D | Day of the month with English suffix (0th, 1st, 2nd, 3rd, etc.) |
| %Y | Year, numeric, 4 digits |
| %y | Year, numeric, 2 digits |
| %X | Year for the week where Sunday is the first day of the week, numeric, 4 digits, used with '%V' |
| %x | Year for the week, where Monday is the first day of the week, numeric, 4 digits, used with '%v' |
| %a | Abbreviated weekday name (Sun..Sat) |
| %d | Day of the month, numeric (00..31) |
| %e | Day of the month, numeric (0..31) |
| %m | Month, numeric (00..12) |
| %c | Month, numeric (0..12) |
| %b | Abbreviated month name (Jan..Dec) |
| %j | Day of year (001..366) |
| %H | Hour (00..23) |
| %k | Hour (0..23) |
| %h | Hour (01..12) |
| %I | Hour (01..12) |
| %l | Hour (1..12) |
| %i | Minutes, numeric (00..59) |
| %r | Time, 12-hour (hh:mm:ss [AP]M) |
| %T | Time, 24-hour (hh:mm:ss) |
| %S | Seconds (00..59) |
| %s | Seconds (00..59) |
| %p | AM or PM |
| %w | Day of the week (0=Sunday..6=Saturday) |
| %U | Week (00..53), where Sunday is the first day of the week |
| %u | Week (00..53), where Monday is the first day of the week |
| %V | Week (01..53), where Sunday is the first day of the week. Used with '%X' |
| %v | Week (01..53), where Monday is the first day of the week. Used with '%x' |
| %% | A literal `%'. |
mysql> SELECT DATE_FORMAT('1997-10-04 22:23:00', '%W %M %Y');
-> 'Saturday October 1997'
mysql> SELECT DATE_FORMAT('1997-10-04 22:23:00', '%H:%i:%s');
-> '22:23:00'
mysql> SELECT DATE_FORMAT('1997-10-04 22:23:00',
'%D %y %a %d %m %b %j');
-> '4th 97 Sat 04 10 Oct 277'
mysql> SELECT DATE_FORMAT('1997-10-04 22:23:00',
'%H %k %I %r %T %S %w');
-> '22 22 10 10:23:00 PM 22:23:00 00 6'
mysql> SELECT DATE_FORMAT('1999-01-01', '%X %V');
-> '1998 52'
As of MySQL Version 3.23, the `%' character is required before
format specifier characters. In earlier versions of MySQL,
`%' was optional.
The reason the ranges for the month and day specifiers begin with zero
is that MySQL allows incomplete dates such as '2004-00-00' to be
stored as of MySQL 3.23.
TIME_FORMAT(time,format)
This is used like the DATE_FORMAT() function described above, but the
format string may contain only those format specifiers that handle
hours, minutes, and seconds. Other specifiers produce a NULL value or
0.
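An illustrative example using only hour specifiers:
mysql> SELECT TIME_FORMAT('22:23:00', '%H %k %h %I %l');
-> '22 22 10 10 10'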
CURDATE()
CURRENT_DATE
Returns today's date as a value in 'YYYY-MM-DD' or YYYYMMDD
format, depending on whether the function is used in a string or numeric
context:
mysql> SELECT CURDATE();
-> '1997-12-15'
mysql> SELECT CURDATE() + 0;
-> 19971215
CURTIME()
CURRENT_TIME
Returns the current time as a value in 'HH:MM:SS' or HHMMSS
format, depending on whether the function is used in a string or numeric
context:
mysql> SELECT CURTIME();
-> '23:50:26'
mysql> SELECT CURTIME() + 0;
-> 235026
NOW()
SYSDATE()
CURRENT_TIMESTAMP
Returns the current date and time as a value in 'YYYY-MM-DD HH:MM:SS'
or YYYYMMDDHHMMSS format, depending on whether the function is used in
a string or numeric context:
mysql> SELECT NOW();
-> '1997-12-15 23:50:26'
mysql> SELECT NOW() + 0;
-> 19971215235026
Note that NOW() is only evaluated once per query, namely at the
start of query execution. This means that multiple references to
NOW() within a single query will always give the same time.
UNIX_TIMESTAMP()
UNIX_TIMESTAMP(date)
If called with no argument, returns a Unix timestamp (seconds since
'1970-01-01 00:00:00' GMT) as an unsigned integer. If
UNIX_TIMESTAMP() is called with a date argument, it
returns the value of the argument as seconds since '1970-01-01
00:00:00' GMT. date may be a DATE string, a
DATETIME string, a TIMESTAMP, or a number in the format
YYMMDD or YYYYMMDD in local time:
mysql> SELECT UNIX_TIMESTAMP();
-> 882226357
mysql> SELECT UNIX_TIMESTAMP('1997-10-04 22:23:00');
-> 875996580
When UNIX_TIMESTAMP is used on a TIMESTAMP column, the function
will return the internal timestamp value directly, with no implicit
``string-to-unix-timestamp'' conversion.
If you pass an out-of-range date to UNIX_TIMESTAMP() it will
return 0, but please note that only basic checking is performed
(year 1970-2037, month 01-12, day 01-31).
If you want to subtract UNIX_TIMESTAMP() columns, you may want to
cast the result to signed integers. See section 6.3.5 Cast Functions.
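A minimal sketch of such a cast (this requires the CAST() function, available from version 4.0.2; the timestamps shown are illustrative):
mysql> SELECT CAST(UNIX_TIMESTAMP('1997-10-04 22:23:00') AS SIGNED) - CAST(UNIX_TIMESTAMP('1997-10-04 22:22:00') AS SIGNED);
-> 60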
FROM_UNIXTIME(unix_timestamp [,format])
Returns a representation of the unix_timestamp argument as a value in
'YYYY-MM-DD HH:MM:SS' or YYYYMMDDHHMMSS format, depending on
whether the function is used in a string or numeric context.
If format is given, the result is formatted according to the
format string. format may contain the same specifiers as
those listed in the entry for the DATE_FORMAT() function:
mysql> SELECT FROM_UNIXTIME(875996580);
-> '1997-10-04 22:23:00'
mysql> SELECT FROM_UNIXTIME(875996580) + 0;
-> 19971004222300
mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(),
'%Y %D %M %h:%i:%s %x');
-> '1997 23rd December 03:43:30 1997'
SEC_TO_TIME(seconds)
Returns the seconds argument, converted to hours, minutes, and seconds,
as a value in 'HH:MM:SS' or HHMMSS format, depending on whether
the function is used in a string or numeric context:
mysql> SELECT SEC_TO_TIME(2378);
-> '00:39:38'
mysql> SELECT SEC_TO_TIME(2378) + 0;
-> 3938
TIME_TO_SEC(time)
Returns the time argument, converted to seconds:
mysql> SELECT TIME_TO_SEC('22:23:00');
-> 80580
mysql> SELECT TIME_TO_SEC('00:39:38');
-> 2378
The syntax of the CAST function is:
CAST(expression AS type) or CONVERT(expression,type)
Where type is one of:
BINARY
CHAR (New in 4.0.6)
DATE
DATETIME
SIGNED {INTEGER}
TIME
UNSIGNED {INTEGER}
CAST() is SQL-99 syntax and CONVERT() is ODBC syntax.
The cast function is mainly useful when you want to create a column with
a specific type in a CREATE ... SELECT statement:
CREATE TABLE new_table SELECT CAST('2000-01-01' AS DATE);
CAST(string AS BINARY) is the same thing as BINARY string.
CAST(expr AS CHAR) treats the expression as a string with the
default character set.
NOTE: In MySQL 4.0 the CAST to DATE,
DATETIME and TIME only marks the column to be a specific
type but doesn't change the value of the column.
In MySQL 4.1.0 the value will be converted to the correct column type when it's sent to the user:
mysql> SELECT CAST(NOW() AS date);
-> 2003-05-26
You should not use CAST() to extract data in different formats but
instead use string functions like LEFT() or
EXTRACT(). See section 6.3.4 Date and Time Functions.
To cast a string to a numeric value, you normally don't have to do anything; just use the string value as if it were a number:
mysql> SELECT 1+'1';
-> 2
If you use a number in string context the number will automatically be
converted to a BINARY string.
mysql> SELECT CONCAT("hello you ",2);
-> "hello you 2"
MySQL supports arithmetic with both signed and unsigned 64-bit values.
If you use a numerical operation (such as +) and one of the
operands is an unsigned integer, the result will be unsigned.
You can override this by using the SIGNED and UNSIGNED
cast operators, which will cast the operation to a signed or
unsigned 64-bit integer, respectively.
mysql> SELECT CAST(1-2 AS UNSIGNED)
-> 18446744073709551615
mysql> SELECT CAST(CAST(1-2 AS UNSIGNED) AS SIGNED);
-> -1
Note that if either operand is a floating-point value (in this context
DECIMAL is regarded as a floating-point value), the result will
be a floating-point value and is not affected by the above rule.
mysql> SELECT CAST(1 AS UNSIGNED) -2.0
-> -1.0
If you are using a string in an arithmetic operation, this is converted to a floating-point number.
The CAST() and CONVERT() functions were added in MySQL 4.0.2.
The handling of unsigned values was changed in MySQL 4.0 to be able to
support BIGINT values properly. If you have some code that you
want to run in both MySQL 4.0 and 3.23 (in which case you probably can't
use the CAST function), you can use the following trick to get a signed
result when subtracting two unsigned integer columns:
SELECT (unsigned_column_1+0.0)-(unsigned_column_2+0.0);
The idea is that the columns are converted to floating-point before doing the subtraction.
If you get a problem with UNSIGNED columns in your old MySQL
application when porting to MySQL 4.0, you can use the
--sql-mode=NO_UNSIGNED_SUBTRACTION option when starting
mysqld. Note however that as long as you use this, you will not
be able to make efficient use of the UNSIGNED BIGINT column type.
MySQL uses BIGINT (64-bit) arithmetic for bit operations, so
these operators have a maximum range of 64 bits.
|
mysql> SELECT 29 | 15;
-> 31
The result is an unsigned 64-bit integer.
&
mysql> SELECT 29 & 15;
-> 13
The result is an unsigned 64-bit integer.
^
mysql> SELECT 1 ^ 1;
-> 0
mysql> SELECT 1 ^ 0;
-> 1
mysql> SELECT 11 ^ 3;
-> 8
The result is an unsigned 64-bit integer.
XOR was added in version 4.0.2.
<<
Shifts a longlong (BIGINT) number to the left:
mysql> SELECT 1 << 2;
-> 4
The result is an unsigned 64-bit integer.
>>
Shifts a longlong (BIGINT) number to the right:
mysql> SELECT 4 >> 2;
-> 1
The result is an unsigned 64-bit integer.
~
mysql> SELECT 5 & ~1;
-> 4
The result is an unsigned 64-bit integer.
BIT_COUNT(N)
Returns the number of bits that are set in the argument N:
mysql> SELECT BIT_COUNT(29);
-> 4
DATABASE()
Returns the current database name:
mysql> SELECT DATABASE();
-> 'test'
If there is no current database, DATABASE() returns the empty string.
USER()
SYSTEM_USER()
SESSION_USER()
Returns the current MySQL user name:
mysql> SELECT USER();
-> 'davida@localhost'
In MySQL Version 3.22.11 or later, this includes the client hostname
as well as the user name. You can extract just the user name part like this
(which works whether the value includes a hostname part):
mysql> SELECT SUBSTRING_INDEX(USER(),"@",1);
-> 'davida'
CURRENT_USER()
Returns the user name that the current session was authenticated as. This
can differ from the value of USER():
mysql> SELECT USER();
-> 'davida@localhost'
mysql> SELECT * FROM mysql.user;
-> ERROR 1044: Access denied for user: '@localhost' to database 'mysql'
mysql> SELECT CURRENT_USER();
-> '@localhost'
PASSWORD(str)
OLD_PASSWORD(str)
Calculates a password string from the plaintext password str. This is
the function that is used for encrypting MySQL passwords for storage
in the Password column of the user grant table:
mysql> SELECT PASSWORD('badpwd');
-> '7f84554057dd964b'
PASSWORD() encryption is non-reversible.
PASSWORD() does not perform password encryption in the same way that
Unix passwords are encrypted. See ENCRYPT().
Note:
The PASSWORD() function is used by the authentication system in
MySQL Server; you should NOT use it in your own applications.
For that purpose, use MD5() or SHA1() instead.
Also see RFC-2195 for more information about handling passwords
and authentication securely in your application.
ENCRYPT(str[,salt])
Encrypts str using the Unix crypt() system call. The
salt argument should be a string with two characters.
(As of MySQL Version 3.22.16, salt may be longer than two characters.):
mysql> SELECT ENCRYPT("hello");
-> 'VxuFAJXVARROc'
If crypt() is not available on your system, ENCRYPT() always
returns NULL.
ENCRYPT() ignores all but the first 8 characters of str, at
least on some systems. This will be determined by the behaviour of the
underlying crypt() system call.
ENCODE(str,pass_str)
Encrypts str using pass_str as the password.
To decrypt the result, use DECODE().
The result is a binary string of the same length as str.
If you want to save it in a column, use a BLOB column type.
DECODE(crypt_str,pass_str)
Decrypts the encrypted string crypt_str using pass_str as the
password. crypt_str should be a string returned from
ENCODE().
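As an illustration, decoding with the same password recovers the original string:
mysql> SELECT DECODE(ENCODE('secret','mypass'),'mypass');
-> 'secret'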
MD5(string)
Calculates an MD5 128-bit checksum for the string. The value is returned
as a 32-digit hexadecimal number that may, for example, be used as a hash key:
mysql> SELECT MD5("testing");
-> 'ae2b1fca515949e5d54fb22b8ed95575'
This is the "RSA Data Security, Inc. MD5 Message-Digest Algorithm".
SHA1(string)
SHA(string)
Calculates an SHA1 160-bit checksum for the string, returned as a string
of 40 hexadecimal digits, or NULL if the input argument was NULL.
One of the possible uses for this function is as a hash key. You can
also use it as a cryptographically safe function for storing passwords.
mysql> SELECT SHA1("abc");
-> 'a9993e364706816aba3e25717850c26c9cd0d89d'
SHA1() was added in version 4.0.2, and can be considered
a cryptographically more secure equivalent of MD5().
SHA() is synonym for SHA1().
AES_ENCRYPT(string,key_string)
AES_DECRYPT(string,key_string)
These functions allow encryption and decryption of data using the
official AES (Advanced Encryption Standard) algorithm with a 128-bit
key length. If either argument is NULL,
the result of this function is also NULL.
As AES is a block-level algorithm, padding is used to encode uneven-length
strings, so the result string length may be calculated as
16*(trunc(string_length/16)+1).
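For example, a 4-character string encrypts to a 16-byte result, since 16*(trunc(4/16)+1) = 16:
mysql> SELECT LENGTH(AES_ENCRYPT('text','password'));
-> 16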
If AES_DECRYPT() detects invalid data or incorrect padding, it
will return NULL. However, it is possible for AES_DECRYPT()
to return a non-NULL value (possibly garbage) if the input data or
the key was invalid.
You can use the AES functions to store data in an encrypted form by
modifying your queries:
INSERT INTO t VALUES (1,AES_ENCRYPT("text","password"));
You can get even more security by avoiding transferring the key over the
connection for each query, which can be accomplished by storing it in a
server side variable at connection time:
SELECT @password:="my password";
INSERT INTO t VALUES (1,AES_ENCRYPT("text",@password));
AES_ENCRYPT() and AES_DECRYPT() were added in version 4.0.2,
and can be considered the most cryptographically secure encryption
functions currently available in MySQL.
DES_ENCRYPT(string_to_encrypt [, (key_number | key_string) ] )
Encrypts the string with the given key using the Triple-DES algorithm.
Note that this function only works if you have configured MySQL with
SSL support. See section 4.3.9 Using Secure Connections.
The encryption key to use is chosen the following way:
| Argument | Description |
| Only one argument | The first key from the des-key-file is used. |
| key number | The given key (0-9) from the des-key-file is used. |
| string | The given key_string will be used to crypt string_to_encrypt. |
The return string will be a binary string where the first character
will be CHAR(128 | key_number).
The 128 is added to make it easier to recognise an encrypted key.
If you use a string key, key_number will be 127.
On error, this function returns NULL.
The string length for the result will be
new_length= org_length + (8-(org_length % 8))+1.
The des-key-file has the following format:
key_number des_key_string
key_number des_key_string
Each key_number must be a number in the range from 0 to 9. Lines in
the file may be in any order. des_key_string is the string that
will be used to encrypt the message. Between the number and the key there
should be at least one space. The first key is the default key that will
be used if you don't specify any key argument to DES_ENCRYPT().
You can tell MySQL to read new key values from the key file with the
FLUSH DES_KEY_FILE command. This requires the Reload_priv
privilege.
One benefit of having a set of default keys is that it gives applications
a way to check for existence of encrypted column values, without giving
the end user the right to decrypt those values.
mysql> SELECT customer_address FROM customer_table WHERE
crypted_credit_card = DES_ENCRYPT("credit_card_number");
DES_DECRYPT(string_to_decrypt [, key_string])
Decrypts a string encrypted with DES_ENCRYPT().
Note that this function only works if you have configured MySQL with
SSL support. See section 4.3.9 Using Secure Connections.
If no key_string argument is given, DES_DECRYPT() examines
the first byte of the encrypted string to determine the DES key number
that was used to encrypt the original string, then reads the key
from the des-key-file to decrypt the message. For this to work
the user must have the SUPER privilege.
If you pass this function a key_string argument, that string
is used as the key for decrypting the message.
If the string_to_decrypt doesn't look like an encrypted string, MySQL
will return the given string_to_decrypt.
On error, this function returns NULL.
COMPRESS(string_to_compress)
Compresses the given string. This function requires MySQL to have been
compiled with a compression library such as zlib; otherwise, the return
value is always NULL:
mysql> select length(compress(repeat("a",1000)));
+------------------------------------+
| length(compress(repeat("a",1000))) |
+------------------------------------+
| 21 |
+------------------------------------+
1 row in set (0.00 sec)
mysql> select length(compress(""));
+----------------------+
| length(compress("")) |
+----------------------+
| 0 |
+----------------------+
1 row in set (0.00 sec)
mysql> select length(compress("a"));
+-----------------------+
| length(compress("a")) |
+-----------------------+
| 13 |
+-----------------------+
1 row in set (0.00 sec)
mysql> select length(compress(repeat("a",16)));
+----------------------------------+
| length(compress(repeat("a",16))) |
+----------------------------------+
| 15 |
+----------------------------------+
1 row in set (0.00 sec)
UNCOMPRESS(string_to_uncompress)
Uncompresses a string compressed by the COMPRESS() function:
mysql> select uncompress(compress("any string"));
+------------------------------------+
| uncompress(compress("any string")) |
+------------------------------------+
| any string |
+------------------------------------+
1 row in set (0.00 sec)
UNCOMPRESSED_LENGTH(compressed_string)
Returns the length that the compressed string had before being compressed:
mysql> select uncompressed_length(compress(repeat("a",30)));
+-----------------------------------------------+
| uncompressed_length(compress(repeat("a",30))) |
+-----------------------------------------------+
| 30 |
+-----------------------------------------------+
1 row in set (0.00 sec)
LAST_INSERT_ID([expr])
Returns the last automatically generated value that was inserted into an AUTO_INCREMENT column.
See section 8.1.3.130 mysql_insert_id().
mysql> SELECT LAST_INSERT_ID();
-> 195
The last ID that was generated is maintained in the server on a
per-connection basis. It will not be changed by another client. It will not
even be changed if you update another AUTO_INCREMENT column with a
non-magic value (that is, a value that is not NULL and not 0).
If you insert many rows at the same time with an insert statement,
LAST_INSERT_ID() returns the value for the first inserted row.
The reason for this is to make it possible to easily reproduce
the same INSERT statement against some other server.
If expr is given as an argument to LAST_INSERT_ID(), then
the value of the argument is returned by the function, and is set as the
next value to be returned by LAST_INSERT_ID(). This can be used
to simulate sequences:
First create the table:
mysql> CREATE TABLE sequence (id INT NOT NULL);
mysql> INSERT INTO sequence VALUES (0);
Then the table can be used to generate sequence numbers like this:
mysql> UPDATE sequence SET id=LAST_INSERT_ID(id+1);
You can generate sequences without calling
LAST_INSERT_ID(), but the
utility of using the function this way is that the ID value is maintained in
the server as the last automatically generated value (multi-user safe).
You can retrieve the new ID as you would read any normal
AUTO_INCREMENT value in MySQL. For example, LAST_INSERT_ID()
(without an argument) will return the new ID. The C API function
mysql_insert_id() can also be used to get the value.
Note that because mysql_insert_id() is only updated after INSERT
and UPDATE statements, you can't use the C API function to
retrieve the value of LAST_INSERT_ID(expr) after executing other
SQL statements like SELECT or SET.
FORMAT(X,D)
Formats the number X to a format like '#,###,###.##', rounded
to D decimals. If D is 0, the result will have no
decimal point or fractional part:
mysql> SELECT FORMAT(12332.123456, 4);
-> '12,332.1235'
mysql> SELECT FORMAT(12332.1,4);
-> '12,332.1000'
mysql> SELECT FORMAT(12332.2,0);
-> '12,332'
VERSION()
Returns a string indicating the MySQL server version:
mysql> SELECT VERSION();
-> '3.23.13-log'
Note that if your version ends with -log this means that logging is
enabled.
CONNECTION_ID()
Returns the connection id (thread_id) for the connection.
Every connection has its own unique id:
mysql> SELECT CONNECTION_ID();
-> 1
GET_LOCK(str,timeout)
Tries to obtain a lock with a name given by the string str, using a
timeout of timeout seconds. Returns 1 if the lock was obtained
successfully, 0 if the attempt timed out, or NULL if an error
occurred (such as running out of memory or the thread was killed with
mysqladmin kill). A lock is released when you execute
RELEASE_LOCK(), execute a new GET_LOCK(), or the thread
terminates. This function can be used to implement application locks or to
simulate record locks. It blocks requests by other clients for locks with
the same name; clients that agree on a given lock string name can use the
string to perform cooperative advisory locking:
mysql> SELECT GET_LOCK("lock1",10);
-> 1
mysql> SELECT IS_FREE_LOCK("lock2");
-> 1
mysql> SELECT GET_LOCK("lock2",10);
-> 1
mysql> SELECT RELEASE_LOCK("lock2");
-> 1
mysql> SELECT RELEASE_LOCK("lock1");
-> NULL
Note that the second RELEASE_LOCK() call returns NULL because
the lock "lock1" was automatically released by the second
GET_LOCK() call.
RELEASE_LOCK(str)
Releases the lock named by the string str that was obtained with
GET_LOCK(). Returns 1 if the lock was released, 0 if the
lock wasn't locked by this thread (in which case the lock is not released),
and NULL if the named lock didn't exist. The lock will not exist if
it was never obtained by a call to GET_LOCK() or if it already has
been released.
The DO statement is convenient to use with RELEASE_LOCK().
See section 6.4.10 DO Syntax.
IS_FREE_LOCK(str)
Checks if the lock named str is free to use (i.e., not locked).
Returns 1 if the lock is free (no one is using the lock),
0 if the lock is in use, and
NULL on errors (like incorrect arguments).
BENCHMARK(count,expr)
The BENCHMARK() function executes the expression expr
repeatedly count times. It may be used to time how fast MySQL
processes the expression. The result value is always 0. The intended
use is in the mysql client, which reports query execution times:
mysql> SELECT BENCHMARK(1000000,ENCODE("hello","goodbye"));
+----------------------------------------------+
| BENCHMARK(1000000,ENCODE("hello","goodbye")) |
+----------------------------------------------+
| 0 |
+----------------------------------------------+
1 row in set (4.74 sec)
The time reported is elapsed time on the client end, not CPU time on the
server end. It may be advisable to execute BENCHMARK() several
times, and interpret the result with regard to how heavily loaded the
server machine is.
INET_NTOA(expr)
Given a numeric network address (4 or 8 byte), returns the dotted-quad
representation of the address as a string:
mysql> SELECT INET_NTOA(3520061480);
-> "209.207.224.40"
INET_ATON(expr)
Given the dotted-quad representation of a network address as a string,
returns an integer that represents the numeric value of the address:
mysql> SELECT INET_ATON("209.207.224.40");
-> 3520061480
The generated number is always in network byte order; for example the
above number is calculated as 209*256^3 + 207*256^2 + 224*256 +40.
MASTER_POS_WAIT(log_name, log_pos [, timeout])
Blocks until the slave reaches (that is, has read and applied all updates
up to) the specified position in the master log. If master information is
not initialised, returns NULL. If the slave is not running, will block and wait until it
is started and goes to or past the specified position. If the slave is
already past the specified position, returns immediately.
If timeout (new in 4.0.10) is specified, will give up waiting
when timeout seconds have elapsed. timeout must be greater
than 0; a zero or negative timeout means no timeout. The return
value is the number of log events it had to wait to get to the specified
position, or NULL in case of error, or -1 if the timeout
has been exceeded.
This command is useful for control of master-slave synchronisation.
FOUND_ROWS()
Returns the number of rows that the last SELECT statement
would have returned, if it had not been restricted with LIMIT.
For FOUND_ROWS() to work correctly following a SELECT
statement that includes a LIMIT clause, the statement must
include the SQL_CALC_FOUND_ROWS option:
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM tbl_name
WHERE id > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();
The second SELECT will return a number indicating how many rows the
first SELECT would have returned had it been written without the
LIMIT clause.
Note that if you are using SELECT SQL_CALC_FOUND_ROWS ..., MySQL has
to calculate all rows in the result set. However, this is faster than
running the same query again without LIMIT, because the result set need
not be sent to the client.
If the preceding SELECT statement does not include the
SQL_CALC_FOUND_ROWS option, then FOUND_ROWS() may return
a different result when LIMIT is used than when it is not.
SQL_CALC_FOUND_ROWS and FOUND_ROWS() are available starting at MySQL version 4.0.0.
Functions for Use with GROUP BY Clauses
If you use a group function in a statement containing no GROUP BY
clause, it is equivalent to grouping on all rows.
COUNT(expr)
Returns a count of the number of non-NULL values in the rows
retrieved by a SELECT statement:
mysql> SELECT student.student_name,COUNT(*)
-> FROM student,course
-> WHERE student.student_id=course.student_id
-> GROUP BY student_name;
COUNT(*) is somewhat different in that it returns a count of
the number of rows retrieved, whether or not they contain NULL
values.
COUNT(*) is optimised to
return very quickly if the SELECT retrieves from one table, no
other columns are retrieved, and there is no WHERE clause.
For example:
mysql> SELECT COUNT(*) FROM student;
COUNT(DISTINCT expr,[expr...])
Returns a count of the number of different non-NULL values:
mysql> SELECT COUNT(DISTINCT results) FROM student;
In MySQL you can get the number of distinct expression combinations that
don't contain NULL by giving a list of expressions. In SQL-99 you would
have to do a concatenation of all expressions inside
COUNT(DISTINCT ...).
AVG(expr)
Returns the average value of expr:
mysql> SELECT student_name, AVG(test_score)
-> FROM student
-> GROUP BY student_name;
MIN(expr)
MAX(expr)
Returns the minimum or maximum value of expr. MIN() and
MAX() may take a string argument; in such cases they return the
minimum or maximum string value. See section 5.4.3 How MySQL Uses Indexes.
mysql> SELECT student_name, MIN(test_score), MAX(test_score)
-> FROM student
-> GROUP BY student_name;
In MIN(), MAX() and other aggregate functions, MySQL
currently compares ENUM and SET columns by their string
value rather than by the string's relative position in the set.
This will be rectified.
SUM(expr)
Returns the sum of expr. Note that if the return set has no rows,
it returns NULL!
GROUP_CONCAT(expr)
GROUP_CONCAT([DISTINCT] expr [,expr ...]
[ORDER BY {unsigned_integer | col_name | formula} [ASC | DESC] [,col ...]]
[SEPARATOR str_val])
This function was added in MySQL version 4.1.
It returns a string result with the concatenated values from a group:
mysql> SELECT student_name,
-> GROUP_CONCAT(test_score)
-> FROM student
-> GROUP BY student_name;
or
mysql> SELECT student_name,
-> GROUP_CONCAT(DISTINCT test_score
-> ORDER BY test_score DESC SEPARATOR " ")
-> FROM student
-> GROUP BY student_name;
In MySQL you can get the concatenated values of expression combinations.
You can eliminate duplicate values by using DISTINCT.
If you want to sort values in the result, you should use the ORDER BY
clause.
To sort in reverse order, add the DESC (descending) keyword to the
name of the column you are sorting by in the ORDER BY clause. The
default is ascending order; this may be specified explicitly using the
ASC keyword.
SEPARATOR is the string value that should be inserted between
values in the result. The default is a comma (`","'). You can remove
the separator altogether by specifying SEPARATOR "".
You can set a maximum allowed length with the variable
group_concat_max_len in your configuration.
The syntax to do this at runtime is:
SET [SESSION | GLOBAL] group_concat_max_len = unsigned_integer;
If a maximum length has been set, the result is truncated to this maximum length. The
GROUP_CONCAT() function is an enhanced implementation of
the basic LIST() function supported by Sybase SQL Anywhere.
GROUP_CONCAT() is backward compatible with the extremely limited
functionality of LIST(), if only one column and no other options
are specified. LIST() does have a default sorting order.
VARIANCE(expr)
Returns the standard variance of expr. This is an extension to
SQL-99 (available only in version 4.1 or later).
STD(expr)
STDDEV(expr)
Returns the standard deviation of expr. This is an extension to
SQL-99. The STDDEV() form of this function is provided for Oracle
compatibility.
BIT_OR(expr)
Returns the bitwise OR of all bits in expr. The calculation is
performed with 64-bit (BIGINT) precision.
This function returns 0 if there are no matching rows.
BIT_AND(expr)
Returns the bitwise AND of all bits in expr. The calculation is
performed with 64-bit (BIGINT) precision.
This function returns -1 if there are no matching rows.
MySQL has extended the use of GROUP BY. You can use columns or
calculations in the SELECT expressions that don't appear in
the GROUP BY part. Such a column or calculation stands for any possible
value in the group. You can use this to get better performance by avoiding sorting and
grouping on unnecessary items. For example, you don't need to group on
customer.name in the following query:
mysql> SELECT order.custid,customer.name,MAX(payments)
-> FROM order,customer
-> WHERE order.custid = customer.custid
-> GROUP BY order.custid;
In standard SQL, you would have to add customer.name to the
GROUP BY clause. In MySQL, the name is redundant if you don't run in
ANSI mode.
Don't use this feature if the columns you omit from the
GROUP BY part aren't unique in the group! You will get
unpredictable results.
In some cases, you can use MIN() and MAX() to obtain a specific
column value even if it isn't unique. The following gives the value of
column from the row containing the smallest value in the sort
column:
SUBSTR(MIN(CONCAT(RPAD(sort,6,' '),column)),7)
See section 3.5.4 The Rows Holding the Group-wise Maximum of a Certain Field.
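As a hedged illustration of this trick (the table shop with columns
article, dealer, and price is an assumption for this example), the
following returns, for each article, the dealer from the row with the
smallest price:
mysql> SELECT article,
    ->        SUBSTR(MIN(CONCAT(RPAD(price,6,' '),dealer)),7) AS cheapest_dealer
    -> FROM shop
    -> GROUP BY article;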
Note that if you are using MySQL Version 3.22 (or earlier) or if
you are trying to follow SQL-99, you can't use expressions in GROUP
BY or ORDER BY clauses. You can work around this limitation by
using an alias for the expression:
mysql> SELECT id,FLOOR(value/100) AS val FROM tbl_name
-> GROUP BY id,val ORDER BY val;
In MySQL Version 3.23 you can do:
mysql> SELECT id,FLOOR(value/100) FROM tbl_name ORDER BY RAND();
SELECT, INSERT, UPDATE, DELETE
SELECT Syntax
SELECT [STRAIGHT_JOIN]
[SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
[SQL_CACHE | SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS] [HIGH_PRIORITY]
[DISTINCT | DISTINCTROW | ALL]
select_expression,...
[INTO {OUTFILE | DUMPFILE} 'file_name' export_options]
[FROM table_references
[WHERE where_definition]
[GROUP BY {unsigned_integer | col_name | formula} [ASC | DESC], ...]
[HAVING where_definition]
[ORDER BY {unsigned_integer | col_name | formula} [ASC | DESC] ,...]
[LIMIT [offset,] rows | rows OFFSET offset]
[PROCEDURE procedure_name(argument_list)]
[FOR UPDATE | LOCK IN SHARE MODE]]
SELECT is used to retrieve rows selected from one or more tables.
select_expression indicates the columns you want to retrieve.
SELECT may also be used to retrieve rows computed without reference to
any table.
For example:
mysql> SELECT 1 + 1;
-> 2
All keywords used must be given in exactly the order shown above. For example,
a HAVING clause must come after any GROUP BY clause and before
any ORDER BY clause.
A SELECT expression may be given an alias using AS. The alias
is used as the expression's column name and can be used with
ORDER BY or HAVING clauses. For example:
mysql> SELECT CONCAT(last_name,', ',first_name) AS full_name
FROM mytable ORDER BY full_name;
It is not allowable to use a column alias in a WHERE clause,
because the column value may not yet be determined when the
WHERE clause is executed.
See section A.5.4 Problems with alias.
The FROM table_references clause indicates the tables from which to
retrieve rows. If you name more than one table, you are performing a
join. For information on join syntax, see section 6.4.1.1 JOIN Syntax.
For each table specified, you may optionally specify an alias.
table_name [[AS] alias] [[USE INDEX (key_list)] | [IGNORE INDEX (key_list)] | [FORCE INDEX (key_list)]]
As of MySQL Version 3.23.12, you can give hints about which index MySQL should use when retrieving information from a table. This is useful if
EXPLAIN shows that MySQL is
using the wrong index from the list of possible indexes. By specifying
USE INDEX (key_list), you can tell MySQL to use only one of the
possible indexes to find rows in the table. The alternative syntax
IGNORE INDEX (key_list) can be used to tell MySQL to not use some
particular index.
In MySQL 4.0.9 you can also use FORCE INDEX. This acts like
USE INDEX (key_list) but with the addition that a table scan
is assumed to be VERY expensive. In other words, a table scan will
only be used if there is no way to use one of the given indexes to
find rows in the table.
USE/IGNORE/FORCE KEY are synonyms for USE/IGNORE/FORCE INDEX.
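For illustration, a minimal sketch of the FORCE INDEX form (the table
table1 and the index name key1 are assumptions for this example):
mysql> SELECT * FROM table1 FORCE INDEX (key1)
    -> WHERE key1=1 AND key2=2;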
You can refer to a table as tbl_name (within the current database),
or as dbname.tbl_name to explicitly specify a database.
You can refer to a column as col_name, tbl_name.col_name, or
db_name.tbl_name.col_name. You need not specify a tbl_name or
db_name.tbl_name prefix for a column reference in a SELECT
statement unless the reference would be ambiguous. See section 6.1.2 Database, Table, Index, Column, and Alias Names,
for examples of ambiguity that require the more explicit column reference
forms.
A table reference may be aliased using tbl_name [AS] alias_name:
mysql> SELECT t1.name, t2.salary FROM employee AS t1, info AS t2
-> WHERE t1.name = t2.name;
mysql> SELECT t1.name, t2.salary FROM employee t1, info t2
-> WHERE t1.name = t2.name;
Columns selected for output may be referred to in ORDER BY and
GROUP BY clauses using column names, column aliases, or column
positions. Column positions begin with 1:
mysql> SELECT college, region, seed FROM tournament
-> ORDER BY region, seed;
mysql> SELECT college, region AS r, seed AS s FROM tournament
-> ORDER BY r, s;
mysql> SELECT college, region, seed FROM tournament
-> ORDER BY 2, 3;
To sort in reverse order, add the DESC (descending) keyword to the
name of the column in the ORDER BY clause that you are sorting by.
The default is ascending order; this may be specified explicitly using
the ASC keyword.
In the WHERE clause, you can use any of the functions that
MySQL supports. See section 6.3 Functions for Use in SELECT and WHERE Clauses.
The HAVING clause can refer to any column or alias named in the
select_expression. It is applied last, just before items are sent to
the client, with no optimisation. Don't use HAVING for items that
should be in the WHERE clause. For example, do not write this:
mysql> SELECT col_name FROM tbl_name HAVING col_name > 0;
Write this instead:
mysql> SELECT col_name FROM tbl_name WHERE col_name > 0;
In MySQL Version 3.22.5 or later, you can also write queries like this:
mysql> SELECT user,MAX(salary) FROM users
-> GROUP BY user HAVING MAX(salary)>10;
In older MySQL versions, you can write this instead:
mysql> SELECT user,MAX(salary) AS sum FROM users
-> group by user HAVING sum>10;
The DISTINCT, DISTINCTROW, and ALL options specify
whether duplicate rows should be returned. The default is ALL:
all matching rows are returned. DISTINCT and DISTINCTROW
are synonyms and specify that duplicate rows in the result set should
be removed.
The options beginning with SQL_, as well as STRAIGHT_JOIN and
HIGH_PRIORITY, are MySQL extensions to SQL-99.
HIGH_PRIORITY will give the SELECT higher priority than
a statement that updates a table. You should only use this for queries
that are very fast and must be done at once. A SELECT HIGH_PRIORITY
query will run if the table is locked for read even if there is an update
statement that is waiting for the table to be free.
SQL_BIG_RESULT can be used with GROUP BY or DISTINCT
to tell the optimiser that the result set will have many rows. In this case,
MySQL will directly use disk-based temporary tables if needed.
MySQL will also, in this case, prefer sorting to doing a
temporary table with a key on the GROUP BY elements.
SQL_BUFFER_RESULT will force the result to be put into a temporary
table. This will help MySQL free the table locks early and will help
in cases where it takes a long time to send the result set to the client.
SQL_SMALL_RESULT, a MySQL-specific option, can be used
with GROUP BY or DISTINCT to tell the optimiser that the
result set will be small. In this case, MySQL will use fast
temporary tables to store the resulting table instead of using sorting. In
MySQL Version 3.23 this shouldn't normally be needed.
SQL_CALC_FOUND_ROWS (version 4.0.0 and up) tells MySQL to calculate
how many rows there would be in the result set, disregarding any
LIMIT clause.
The number of rows can then be retrieved with SELECT FOUND_ROWS().
See section 6.3.6.2 Miscellaneous Functions.
Please note that in versions prior to 4.1.0 this does not work with
LIMIT 0, which is optimised to return instantly (resulting in a
row count of 0). See section 5.2.8 How MySQL Optimises LIMIT.
SQL_CACHE tells MySQL to store the query result in the query cache
if you are using QUERY_CACHE_TYPE=2 (DEMAND).
See section 6.9 MySQL Query Cache. For a query with UNIONs and/or subqueries, this
option takes effect for every SELECT in the query.
SQL_NO_CACHE tells MySQL to not allow the query result to be stored
in the query cache. See section 6.9 MySQL Query Cache. For a query with UNIONs
and/or subqueries, this option takes effect for every SELECT
in the query.
If you use GROUP BY, the output rows will be sorted according to the
GROUP BY columns as if you had an ORDER BY over all the fields
in the GROUP BY. MySQL has extended the GROUP BY syntax so that
you can also specify ASC and DESC after columns named in the GROUP BY:
SELECT a,COUNT(b) FROM test_table GROUP BY a DESC
MySQL has extended the use of GROUP BY to allow you to
select fields that are not mentioned in the GROUP BY clause.
If you are not getting the results you expect from your query, please
read the GROUP BY description.
See section 6.3.7 Functions for Use with GROUP BY Clauses.
STRAIGHT_JOIN forces the optimiser to join the tables in the order in
which they are listed in the FROM clause. You can use this to speed up
a query if the optimiser joins the tables in non-optimal order.
See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
The LIMIT clause can be used to constrain the number of rows returned
by the SELECT statement. LIMIT takes one or two numeric
arguments. The arguments must be integer constants.
If two arguments are given, the first specifies the offset of the first row to
return, the second specifies the maximum number of rows to return.
The offset of the initial row is 0 (not 1):
To be compatible with PostgreSQL, MySQL also supports the syntax
LIMIT # OFFSET #.
mysql> SELECT * FROM table LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use -1 for the second parameter:
mysql> SELECT * FROM table LIMIT 95,-1; # Retrieve rows 96-last.
If one argument is given, it indicates the maximum number of rows to return:
mysql> SELECT * FROM table LIMIT 5; # Retrieve first 5 rows
In other words,
LIMIT n is equivalent to LIMIT 0,n.
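For example, the PostgreSQL-compatible form below retrieves rows 6-15
and is equivalent to LIMIT 5,10 (the table name is only illustrative):
mysql> SELECT * FROM table LIMIT 10 OFFSET 5;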
The SELECT ... INTO OUTFILE 'file_name' form of SELECT writes
the selected rows to a file. The file is created on the server host and
cannot already exist (among other things, this prevents database tables and
files such as `/etc/passwd' from being destroyed). You must have the
FILE privilege on the server host to use this form of SELECT.
SELECT ... INTO OUTFILE is mainly intended to let you very
quickly dump a table on the server machine. If you want to create the
resulting file on some other host than the server host you can't use
SELECT ... INTO OUTFILE. In this case you should instead use some
client program like mysqldump --tab or mysql -e "SELECT
..." > outfile to generate the file.
SELECT ... INTO OUTFILE is the complement of LOAD DATA
INFILE; the syntax for the export_options part of the statement
consists of the same FIELDS and LINES clauses that are used
with the LOAD DATA INFILE statement.
See section 6.4.9 LOAD DATA INFILE Syntax.
In the resulting text file, only the following characters are escaped by
the ESCAPED BY character:
The ESCAPED BY character
The first character of FIELDS TERMINATED BY
The first character of LINES TERMINATED BY
In addition, ASCII 0 is converted to ESCAPED BY followed by 0
(ASCII 48).
The reason for the above is that you must escape any FIELDS
TERMINATED BY, ESCAPED BY, or LINES TERMINATED BY
characters to reliably be able to read the file back. ASCII 0 is
escaped to make it easier to view with some pagers.
As the resulting file doesn't have to conform to the SQL syntax, nothing
else need be escaped.
Here follows an example of getting a file in the format used by many
old programs.
SELECT a,b,a+b INTO OUTFILE "/tmp/result.text"
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY "\n"
    FROM test_table;
If you use INTO DUMPFILE instead of INTO OUTFILE, MySQL
will only write one row into the file, without any column or line
terminations and without any escaping. This is useful if you want to
store a blob in a file.
Note that any file created by INTO OUTFILE or INTO
DUMPFILE is going to be writeable by all users! The reason is that because the
MySQL server can't create a file that is owned by anyone other
than the user it's running as (you should never run mysqld as root),
the file has to be world-writeable so that you can manipulate it.
If you use FOR UPDATE on a storage engine with page/row locks,
the examined rows will be write locked.
JOIN Syntax
MySQL supports the following JOIN syntaxes for use in
SELECT statements:
table_reference, table_reference
table_reference [CROSS] JOIN table_reference
table_reference INNER JOIN table_reference join_condition
table_reference STRAIGHT_JOIN table_reference
table_reference LEFT [OUTER] JOIN table_reference join_condition
table_reference LEFT [OUTER] JOIN table_reference
table_reference NATURAL [LEFT [OUTER]] JOIN table_reference
{ OJ table_reference LEFT OUTER JOIN table_reference ON conditional_expr }
table_reference RIGHT [OUTER] JOIN table_reference join_condition
table_reference RIGHT [OUTER] JOIN table_reference
table_reference NATURAL [RIGHT [OUTER]] JOIN table_reference
Where table_reference is defined as:
table_name [[AS] alias] [[USE INDEX (key_list)] | [IGNORE INDEX (key_list)] | [FORCE INDEX (key_list)]]
and join_condition is defined as:
ON conditional_expr | USING (column_list)
You should generally not have any conditions in the ON part that are
used to restrict which rows you have in the result set (there are exceptions
to this rule). If you want to restrict which rows should be in the result,
you have to do this in the WHERE clause.
Note that in versions before Version 3.23.17, the INNER JOIN didn't
take a join_condition!
The last LEFT OUTER JOIN syntax shown above exists only for
compatibility with ODBC:
A table reference may be aliased using tbl_name AS alias_name or
tbl_name alias_name:
mysql> SELECT t1.name, t2.salary FROM employee AS t1, info AS t2
-> WHERE t1.name = t2.name;
The ON conditional is any conditional expression of the form that may be used in
a WHERE clause.
If there is no matching record for the right table in the ON or
USING part in a LEFT JOIN, a row with all columns set to
NULL is used for the right table. You can use this fact to find
records in a table that have no counterpart in another table:
mysql> SELECT table1.* FROM table1
-> LEFT JOIN table2 ON table1.id=table2.id
-> WHERE table2.id IS NULL;
This example finds all rows in table1 with an id value that is
not present in table2 (that is, all rows in table1 with no
corresponding row in table2). This assumes that table2.id is
declared NOT NULL, of course. See section 5.2.6 How MySQL Optimises LEFT JOIN and RIGHT JOIN.
The USING (column_list) clause names a list of columns that must
exist in both tables. A USING clause such as:
A LEFT JOIN B USING (C1,C2,C3,...)
is defined to be semantically identical to an ON expression like
this:
A.C1=B.C1 AND A.C2=B.C2 AND A.C3=B.C3,...
The NATURAL [LEFT] JOIN of two tables is defined to be
semantically equivalent to an INNER JOIN or a LEFT JOIN
with a USING clause that names all columns that exist in both
tables.
INNER JOIN and , (comma) are semantically equivalent.
Both do a full join between the tables used. Normally, you specify
how the tables should be linked in the WHERE condition.
RIGHT JOIN works analogously to LEFT JOIN. To keep code
portable across databases, it's recommended to use LEFT JOIN
instead of RIGHT JOIN.
STRAIGHT_JOIN is identical to JOIN, except that the left table
is always read before the right table. This can be used for those (few)
cases where the join optimiser puts the tables in the wrong order.
As shown in the table_reference syntax above, you can give hints about
which index MySQL should use. This is useful if
EXPLAIN shows that MySQL is
using the wrong index from the list of possible indexes. By specifying
USE INDEX (key_list), you can tell MySQL to use only one of the
possible indexes to find rows in the table. The alternative syntax
IGNORE INDEX (key_list) can be used to tell MySQL to not use some
particular index.
In MySQL 4.0.9 you can also use FORCE INDEX. This acts like
USE INDEX (key_list) but with the addition that a table scan
is assumed to be VERY expensive. In other words, a table scan will
only be used if there is no way to use one of the given indexes to
find rows in the table.
USE/IGNORE KEY are synonyms for USE/IGNORE INDEX.
Some examples:
mysql> SELECT * FROM table1,table2 WHERE table1.id=table2.id;
mysql> SELECT * FROM table1 LEFT JOIN table2 ON table1.id=table2.id;
mysql> SELECT * FROM table1 LEFT JOIN table2 USING (id);
mysql> SELECT * FROM table1 LEFT JOIN table2 ON table1.id=table2.id
-> LEFT JOIN table3 ON table2.id=table3.id;
mysql> SELECT * FROM table1 USE INDEX (key1,key2)
-> WHERE key1=1 AND key2=2 AND key3=3;
mysql> SELECT * FROM table1 IGNORE INDEX (key3)
-> WHERE key1=1 AND key2=2 AND key3=3;
See section 5.2.6 How MySQL Optimises LEFT JOIN and RIGHT JOIN.
UNION Syntax
SELECT ... UNION [ALL] SELECT ... [UNION SELECT ...]
UNION is implemented in MySQL 4.0.0.
UNION is used to combine the result from many SELECT
statements into one result set.
The columns listed in the select_expression portion of the SELECT
should have the same type. The column names used in the first
SELECT query will be used as the column names for the results
returned.
The SELECT commands are normal select commands, but with the following
restrictions:
Only the last SELECT command can have INTO OUTFILE.
If you don't use the keyword ALL for the UNION, all
returned rows will be unique, as if you had done a DISTINCT for
the total result set. If you specify ALL, then you will get all
matching rows from all the used SELECT statements.
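As a minimal sketch (the tables t1 and t2, each with a column a, are
assumptions for this example):
mysql> SELECT a FROM t1 UNION ALL SELECT a FROM t2;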
If you want to use an ORDER BY for the total UNION result,
you should use parentheses:
(SELECT a FROM table_name WHERE a=10 AND B=1 ORDER BY a LIMIT 10)
UNION
(SELECT a FROM table_name WHERE a=11 AND B=2 ORDER BY a LIMIT 10)
ORDER BY a;
HANDLER Syntax
HANDLER tbl_name OPEN [ AS alias ]
HANDLER tbl_name READ index_name { = | >= | <= | < } (value1,value2,...)
[ WHERE ... ] [LIMIT ... ]
HANDLER tbl_name READ index_name { FIRST | NEXT | PREV | LAST }
[ WHERE ... ] [LIMIT ... ]
HANDLER tbl_name READ { FIRST | NEXT }
[ WHERE ... ] [LIMIT ... ]
HANDLER tbl_name CLOSE
The HANDLER statement provides direct access to the MyISAM table
storage engine interface.
The first form of HANDLER statement opens a table, making
it accessible via subsequent HANDLER ... READ statements.
This table object is not shared by other threads and will not be closed
until the thread calls HANDLER tbl_name CLOSE or the thread dies.
The second form fetches one row (or more, as specified by the LIMIT clause)
where the index satisfies the given comparison and any WHERE
condition is met. If the index consists of several parts (spans
several columns), the values are specified as a comma-separated list;
providing values for only the first few columns is possible.
The third form fetches one row (or more, as specified by the LIMIT clause)
from the table in index order, matching any WHERE condition.
The fourth form (without index specification) fetches one row (or more, as specified
by the LIMIT clause) from the table in natural row order (as stored
in the datafile), matching any WHERE condition. It is faster than
HANDLER tbl_name READ index_name when a full table scan is desired.
HANDLER ... CLOSE closes a table that was opened with
HANDLER ... OPEN.
Note: If you're using the HANDLER interface on a PRIMARY KEY, remember
to quote the name: HANDLER tbl READ `PRIMARY` > (...)
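For illustration, a minimal HANDLER session sketch (the MyISAM table t1
and its index idx_a are assumptions for this example):
mysql> HANDLER t1 OPEN;
mysql> HANDLER t1 READ idx_a = (5);
mysql> HANDLER t1 READ idx_a NEXT;
mysql> HANDLER t1 READ FIRST;
mysql> HANDLER t1 CLOSE;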
HANDLER is a somewhat low-level statement. For example, it does
not provide consistency. That is, HANDLER ... OPEN does NOT
take a snapshot of the table, and does NOT lock the table. This
means that after a HANDLER ... OPEN is issued, table data can be
modified (by this or any other thread) and these modifications may appear
only partially in HANDLER ... NEXT or HANDLER ... PREV scans.
The reasons to use this interface instead of normal SQL are:
HANDLER is faster than a normal SELECT because:
A designated table object is allocated for the thread at
HANDLER OPEN.
INSERT Syntax
INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name [(col_name,...)]
VALUES ((expression | DEFAULT),...),(...),...
[ ON DUPLICATE KEY UPDATE col_name=expression, ... ]
or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name [(col_name,...)]
SELECT ...
or INSERT [LOW_PRIORITY | DELAYED] [IGNORE]
[INTO] tbl_name
SET col_name=(expression | DEFAULT), ...
[ ON DUPLICATE KEY UPDATE col_name=expression, ... ]
INSERT inserts new rows into an existing table. The INSERT
... VALUES form of the statement inserts rows based on explicitly
specified values. The INSERT ... SELECT form inserts rows
selected from another table or tables. The INSERT ... VALUES
form with multiple value lists is supported in MySQL Version
3.22.5 or later. The col_name=expression syntax is supported in
MySQL Version 3.22.10 or later.
tbl_name is the table into which rows should be inserted. The column
name list or the SET clause indicates which columns the statement
specifies values for:
If you don't specify the column list for INSERT ... VALUES or INSERT
... SELECT, values for all columns must be provided in the
VALUES() list or by the SELECT. If you don't know the order of
the columns in the table, use DESCRIBE tbl_name to find out.
Any column not explicitly given a value is set to its default value.
Default value assignment is described in section 6.5.3 CREATE TABLE Syntax.
You can also use the keyword DEFAULT to set a column to its
default value. (New in MySQL 4.0.3.) This makes it easier to write
INSERT statements that assign values to all but a few columns,
because it allows you to avoid writing an incomplete VALUES() list
(a list that does not include a value for each column in the table).
Otherwise, you would have to write out the list of column names
corresponding to each value in the VALUES() list.
MySQL always has a default value for all fields. This is something
that is imposed on MySQL to be able to work with both transactional
and non-transactional tables.
Our view is that checking of field content should be done in the
application and not in the database server.
expression may refer to any column that was set earlier in a value
list. For example, you can say this:
mysql> INSERT INTO tbl_name (col1,col2) VALUES(15,col1*2);
But not this:
mysql> INSERT INTO tbl_name (col1,col2) VALUES(col2*2,15);
If you specify the keyword LOW_PRIORITY, execution of the
INSERT is delayed until no other clients are reading from the
table. In this case the client has to wait until the insert statement
is completed, which may take a long time if the table is in heavy
use. This is in contrast to INSERT DELAYED, which lets the client
continue at once. See section 6.4.4 INSERT DELAYED Syntax. Note that LOW_PRIORITY
should normally not be used with MyISAM tables as this disables
concurrent inserts. See section 7.1 MyISAM Tables.
If you specify the keyword IGNORE in an INSERT with many value
rows, any rows that duplicate an existing PRIMARY or UNIQUE
key in the table are ignored and are not inserted. If you do not specify
IGNORE, the insert is aborted if there is any row that duplicates an
existing key value. You can determine with the C API function
mysql_info() how many rows were inserted into the table.
If you specify the ON DUPLICATE KEY UPDATE clause (new in MySQL 4.1.0), and
a row is inserted that would cause a duplicate value in a PRIMARY or
UNIQUE key, an UPDATE of the old row is performed. For
example, the command:
mysql> INSERT INTO table (a,b,c) VALUES (1,2,3)
    -> ON DUPLICATE KEY UPDATE c=c+1;
in the case where column a is declared UNIQUE and already
holds the value 1, would be identical to:
mysql> UPDATE table SET c=c+1 WHERE a=1;
Note that if column b is unique too, the
UPDATE command would be written as:
mysql> UPDATE table SET c=c+1 WHERE a=1 OR b=2 LIMIT 1;
and if a=1 OR b=2 matches several rows, only one row
will be updated! In general, you should try to avoid using the
ON DUPLICATE KEY UPDATE clause on tables with multiple UNIQUE keys.
Since MySQL 4.1.1 you can use the function VALUES(col_name)
to refer to the column value in the INSERT part of the
INSERT ... UPDATE command - that is, the value that would be
inserted if there were no duplicate key conflict. This function is
especially useful in multiple-row inserts. Naturally, the VALUES()
function is only meaningful in an INSERT ... UPDATE command
and returns NULL otherwise.
Example:
mysql> INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
    -> ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
The command above is identical to:
mysql> INSERT INTO table (a,b,c) VALUES (1,2,3)
    -> ON DUPLICATE KEY UPDATE c=3;
mysql> INSERT INTO table (a,b,c) VALUES (4,5,6)
    -> ON DUPLICATE KEY UPDATE c=9;
When one uses
ON DUPLICATE KEY UPDATE,
the DELAYED option is ignored.
If MySQL was configured using the DONT_USE_DEFAULT_FIELDS
option, INSERT statements generate an error unless you explicitly
specify values for all columns that require a non-NULL value.
See section 2.3.3 Typical configure Options.
You can find the value used for an AUTO_INCREMENT column
with the mysql_insert_id function.
See section 8.1.3.130 mysql_insert_id().
If you use INSERT ... SELECT or an INSERT ... VALUES
statement with multiple value lists, you can use the C API function
mysql_info() to get information about the query. The format of the
information string is shown here:
Records: 100 Duplicates: 0 Warnings: 0
Duplicates indicates the number of rows that couldn't be inserted
because they would duplicate some existing unique index value.
Warnings indicates the number of attempts to insert column values that
were problematic in some way. Warnings can occur under any of the following
conditions:
Inserting NULL into a column that has been declared NOT NULL.
The column is set to its default value.
Setting a numeric column to a value such as '10.34 a'. The trailing
garbage is stripped and the remaining numeric part is inserted. If the value
doesn't make sense as a number at all, the column is set to 0.
Inserting a string into a CHAR, VARCHAR, TEXT, or
BLOB column that exceeds the column's maximum length. The value is
truncated to the column's maximum length.
INSERT ... SELECT Syntax
INSERT [LOW_PRIORITY] [IGNORE] [INTO] tbl_name [(column list)] SELECT ...
With the INSERT ... SELECT statement you can quickly insert many rows
into a table from one or many tables.
INSERT INTO tblTemp2 (fldID) SELECT tblTemp1.fldOrder_ID FROM tblTemp1 WHERE tblTemp1.fldOrder_ID > 100;
The following conditions hold for an INSERT ... SELECT statement:
The target table of the INSERT statement cannot appear in the
FROM clause of the SELECT part of the query because it's
forbidden in standard SQL to SELECT from the same table into which you are
inserting. (The problem is that the SELECT possibly would
find records that were inserted earlier during the same run. When using
subquery clauses, the situation could easily be very confusing!)
AUTO_INCREMENT columns work as usual.
You can use the C API function mysql_info() to get information about
the query. See section 6.4.3 INSERT Syntax.
To ensure that the binary log can be used to re-create the original tables,
MySQL does not allow concurrent inserts during INSERT ... SELECT.
You can of course also use REPLACE instead of INSERT to
overwrite old rows.
INSERT DELAYED Syntax
INSERT DELAYED ...
The DELAYED option for the INSERT statement is a
MySQL-specific option that is very useful if you have clients
that can't wait for the INSERT to complete. This is a common
problem when you use MySQL for logging and you also
periodically run SELECT and UPDATE statements that take a
long time to complete. DELAYED was introduced in MySQL
Version 3.22.15. It is a MySQL extension to SQL-92.
INSERT DELAYED only works with ISAM and MyISAM
tables. Note that because MyISAM tables support concurrent
SELECT and INSERT, if there are no free blocks in the
middle of the datafile, you very seldom need to use INSERT
DELAYED with MyISAM. See section 7.1 MyISAM Tables.
When you use INSERT DELAYED, the client will get an OK at once
and the row will be inserted when the table is not in use by any other thread.
Another major benefit of using INSERT DELAYED is that inserts
from many clients are bundled together and written in one block. This is much
faster than doing many separate inserts.
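For illustration, a minimal usage sketch (the MyISAM logging table
access_log and its columns are assumptions for this example):
mysql> INSERT DELAYED INTO access_log (page,hits) VALUES ('index.html',1);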
Note that currently the queued rows are only stored in memory until they are
inserted into the table. This means that if you kill mysqld
the hard way (kill -9) or if mysqld dies unexpectedly, any
queued rows that weren't written to disk are lost!
The following describes in detail what happens when you use the
DELAYED option to INSERT or REPLACE. In this
description, the ``thread'' is the thread that received an INSERT
DELAYED command and ``handler'' is the thread that handles all
INSERT DELAYED statements for a particular table.
When a thread executes a DELAYED statement for a table, a handler
thread is created to process all DELAYED statements for the table, if
no such handler already exists.
The thread checks whether the handler has acquired a DELAYED
lock already; if not, it tells the handler thread to do so. The
DELAYED lock can be obtained even if other threads have a READ
or WRITE lock on the table. However, the handler will wait for all
ALTER TABLE locks or FLUSH TABLES to ensure that the table
structure is up to date.
The thread executes the INSERT statement, but instead of writing
the row to the table, it puts a copy of the final row into a queue that
is managed by the handler thread. Any syntax errors are noticed by the
thread and reported to the client program.
The client can't report the number of duplicates or the AUTO_INCREMENT
value for the resulting row; it can't obtain them from the server, because
the INSERT returns before the insert operation has been completed. If
you use the C API, the mysql_info() function doesn't return anything
meaningful, for the same reason.
After every delayed_insert_limit rows are written, the handler checks
whether any SELECT statements are still pending. If so, it
allows these to execute before continuing.
When the handler has no more rows in its queue, the table is unlocked.
If no new INSERT DELAYED commands are received within
delayed_insert_timeout seconds, the handler terminates.
If more than delayed_queue_size rows are pending already in a
specific handler queue, the thread requesting INSERT DELAYED
waits until there is room in the queue. This is done to ensure that
the mysqld server doesn't use all memory for the delayed memory
queue.
The handler thread will show up in the process list with
delayed_insert in the Command column. It will
be killed if you execute a FLUSH TABLES command or kill it with
KILL thread_id. However, it will first store all queued rows into the
table before exiting. During this time it will not accept any new
INSERT commands from another thread. If you execute an INSERT
DELAYED command after this, a new handler thread will be created.
Note that the above means that INSERT DELAYED commands have higher
priority than normal INSERT commands if there is an INSERT
DELAYED handler already running! Other update commands will have to wait
until the INSERT DELAYED queue is empty, someone kills the handler
thread (with KILL thread_id), or someone executes FLUSH TABLES.
The following status variables provide information about INSERT
DELAYED commands:
Variable                  Meaning
Delayed_insert_threads    Number of handler threads
Delayed_writes            Number of rows written with INSERT DELAYED
Not_flushed_delayed_rows  Number of rows waiting to be written
You can view these variables by issuing a SHOW STATUS statement or
by executing a mysqladmin extended-status command.
Note that INSERT DELAYED is slower than a normal INSERT if the
table is not in use. There is also the additional overhead for the
server to handle a separate thread for each table on which you use
INSERT DELAYED. This means that you should only use INSERT
DELAYED when you are really sure you need it!
UPDATE Syntax
UPDATE [LOW_PRIORITY] [IGNORE] tbl_name
SET col_name1=expr1 [, col_name2=expr2 ...]
[WHERE where_definition]
[ORDER BY ...]
[LIMIT rows]
or
UPDATE [LOW_PRIORITY] [IGNORE] tbl_name [, tbl_name ...]
SET col_name1=expr1 [, col_name2=expr2 ...]
[WHERE where_definition]
UPDATE updates columns in existing table rows with new values.
The SET clause indicates which columns to modify and the values
they should be given. The WHERE clause, if given, specifies
which rows should be updated. Otherwise, all rows are updated. If the
ORDER BY clause is specified, the rows will be updated in the
order that is specified.
If you specify the keyword LOW_PRIORITY, execution of the
UPDATE is delayed until no other clients are reading from the table.
If you specify the keyword IGNORE, the update statement will not
abort even if duplicate key errors occur during the update. Rows that
would cause conflicts will not be updated.
If you access a column from tbl_name in an expression,
UPDATE uses the current value of the column. For example, the
following statement sets the age column to one more than its
current value:
mysql> UPDATE persondata SET age=age+1;
UPDATE assignments are evaluated from left to right. For example, the
following statement doubles the age column, then increments it:
mysql> UPDATE persondata SET age=age*2, age=age+1;
If you set a column to the value it currently has, MySQL notices this and doesn't update it.
UPDATE returns the number of rows that were actually changed.
In MySQL Version 3.22 or later, the C API function mysql_info()
returns the number of rows that were matched and updated and the number of
warnings that occurred during the UPDATE.
Starting from MySQL version 3.23, you can use LIMIT # to ensure
that only a given number of rows are changed. MySQL will stop the
update as soon as it has found LIMIT rows that satisfy the
WHERE clause, whether or not their content actually changed.
If an ORDER BY clause is used (available from MySQL 4.0.0), the rows
will be updated in that order. This is really only useful in conjunction
with LIMIT.
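For example, a sketch combining ORDER BY and LIMIT (the table persondata
and its id column are assumptions for this example):
mysql> UPDATE persondata SET age=age+1 ORDER BY id LIMIT 10;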
Starting with MySQL Version 4.0.4, you can also perform UPDATE
operations that cover multiple tables:
UPDATE items,month SET items.price=month.price WHERE items.id=month.id;
Note: you cannot use ORDER BY or LIMIT with a multi-table
UPDATE.
DELETE Syntax
DELETE [LOW_PRIORITY] [QUICK] FROM table_name
[WHERE where_definition]
[ORDER BY ...]
[LIMIT rows]
or
DELETE [LOW_PRIORITY] [QUICK] table_name[.*] [, table_name[.*] ...]
FROM table-references
[WHERE where_definition]
or
DELETE [LOW_PRIORITY] [QUICK]
FROM table_name[.*] [, table_name[.*] ...]
USING table-references
[WHERE where_definition]
DELETE deletes rows from table_name that satisfy the condition
given by where_definition, and returns the number of records deleted.
If you issue a DELETE with no WHERE clause, all rows are
deleted. If you do this in AUTOCOMMIT mode, this works as
TRUNCATE. See section 6.4.7 TRUNCATE Syntax. In MySQL 3.23,
DELETE without a WHERE clause will return zero as the number
of affected records.
If you really want to know how many records are deleted when you are deleting
all rows, and are willing to suffer a speed penalty, you can use a
DELETE statement of this form:
mysql> DELETE FROM table_name WHERE 1>0;
Note that this is much slower than DELETE FROM table_name with no
WHERE clause, because it deletes rows one at a time.
If you specify the keyword LOW_PRIORITY, execution of the
DELETE is delayed until no other clients are reading from the table.
If you specify the word QUICK, the storage engine will not
merge index leaves during delete, which may speed up certain kinds of
deletes.
In MyISAM tables, deleted records are maintained in a linked list and
subsequent INSERT operations reuse old record positions. To
reclaim unused space and reduce file-sizes, use the OPTIMIZE
TABLE statement or the myisamchk utility to reorganise tables.
OPTIMIZE TABLE is easier, but myisamchk is faster. See
section 4.5.1 OPTIMIZE TABLE Syntax and section 4.4.6.10 Table Optimisation.
The first multi-table delete format is supported starting from MySQL 4.0.0. The second multi-table delete format is supported starting from MySQL 4.0.2.
The idea is that only matching rows from the tables listed
before the FROM or before the USING clause are
deleted. The effect is that you can delete rows from many tables at the
same time and also have additional tables that are used for searching.
The .* after the table names is there just to be compatible with
Access:
DELETE t1,t2 FROM t1,t2,t3 WHERE t1.id=t2.id AND t2.id=t3.id
or
DELETE FROM t1,t2 USING t1,t2,t3 WHERE t1.id=t2.id AND t2.id=t3.id
In the above case we delete matching rows just from tables t1 and
t2.
If an ORDER BY clause is used (available from MySQL 4.0.0), the rows
will be deleted in that order. This is really only useful in conjunction
with LIMIT. For example:
DELETE FROM somelog WHERE user = 'jcole' ORDER BY timestamp LIMIT 1
This will delete the oldest entry (by timestamp) where the row matches
the WHERE clause.
The MySQL-specific LIMIT rows option to DELETE tells
the server the maximum number of rows to be deleted before control is
returned to the client. This can be used to ensure that a specific
DELETE command doesn't take too much time. You can simply repeat
the DELETE command until the number of affected rows is less than
the LIMIT value.
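A sketch of this repeat-until-done pattern (the table big_log and its
created column are assumptions for this example):
mysql> DELETE FROM big_log WHERE created < '2003-01-01' LIMIT 1000;
Repeat the statement until the number of affected rows is less than 1000.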
From MySQL 4.0, you can specify multiple tables in the DELETE
statement to delete rows from one table depending on a particular condition
in multiple tables. However, you cannot use ORDER BY or LIMIT
in a multi-table DELETE.
TRUNCATE Syntax
TRUNCATE TABLE table_name
In 3.23 TRUNCATE TABLE is mapped to
COMMIT ; DELETE FROM table_name. See section 6.4.6 DELETE Syntax.
TRUNCATE TABLE differs from DELETE FROM ...
in the following ways:
TRUNCATE is an Oracle SQL extension.
REPLACE Syntax
REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name [(col_name,...)]
VALUES (expression,...),(...),...
or REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name [(col_name,...)]
SELECT ...
or REPLACE [LOW_PRIORITY | DELAYED]
[INTO] tbl_name
SET col_name=expression, col_name=expression,...
REPLACE works exactly like INSERT, except that if an old
record in the table has the same value as a new record on a UNIQUE
index or PRIMARY KEY, the old record is deleted before the new
record is inserted.
See section 6.4.3 INSERT Syntax.
In other words, you can't access the values of the old row from a
REPLACE statement. In some old MySQL versions it appeared that
you could do this, but that was a bug that has been corrected.
To be able to use REPLACE you must have INSERT and
DELETE privileges for the table.
When you use a REPLACE command, mysql_affected_rows()
will return 2 if the new row replaced an old row. This is because
one row was inserted after the duplicate was deleted.
This fact makes it easy to determine whether REPLACE added
or replaced a row: check whether the affected-rows value is 1 (added)
or 2 (replaced).
Note that unless you use a UNIQUE index or PRIMARY KEY,
using a REPLACE command makes no sense, since it would just do
an INSERT.
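For illustration, a minimal sketch (the table person with a PRIMARY KEY
on id is an assumption for this example):
mysql> REPLACE INTO person (id,name) VALUES (1,'Monty');
If no row with id=1 exists, the affected-rows value is 1; if an old row
was replaced, it is 2.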
Here follows the algorithm used, in more detail
(this is also used with LOAD DATA ... REPLACE):
- Insert the row into the table
- While duplicate key error for primary or unique key
- Revert changed keys
- Read conflicting row from the table through the duplicate key value
- Delete conflicting row
- Try again to insert the original primary key and unique keys in the tree
LOAD DATA INFILE Syntax
LOAD DATA [LOW_PRIORITY | CONCURRENT] [LOCAL] INFILE 'file_name.txt'
[REPLACE | IGNORE]
INTO TABLE tbl_name
[FIELDS
[TERMINATED BY '\t']
[[OPTIONALLY] ENCLOSED BY '']
[ESCAPED BY '\\' ]
]
[LINES
[STARTING BY '']
[TERMINATED BY '\n']
]
[IGNORE number LINES]
[(col_name,...)]
The LOAD DATA INFILE statement reads rows from a text file into a
table at a very high speed. If the LOCAL keyword is specified, the
file is read from the client host. If LOCAL is not specified, the
file must be located on the server. (LOCAL is available in
MySQL Version 3.22.6 or later.)
For security reasons, when reading text files located on the server, the
files must either reside in the database directory or be readable by all.
Also, to use LOAD DATA INFILE on server files, you must have the
FILE privilege on the server host.
See section 4.2.7 Privileges Provided by MySQL.
In MySQL 3.23.49 and MySQL 4.0.2 LOCAL will only work if you have
not started mysqld with --local-infile=0 or if you
have not enabled your client to support LOCAL. See section 4.2.4 Security issues with LOAD DATA LOCAL.
If you specify the keyword LOW_PRIORITY, execution of the
LOAD DATA statement is delayed until no other clients are reading
from the table.
If you specify the keyword CONCURRENT with a MyISAM table,
then other threads can retrieve data from the table while LOAD
DATA is executing. Using this option will of course affect the
performance of LOAD DATA a bit even if no other thread is using
the table at the same time.
Using LOCAL will be a bit slower than letting the server access the
files directly, because the contents of the file must travel from the client
host to the server host. On the other hand, you do not need the
FILE privilege to load local files.
If you are using MySQL before Version 3.23.24 you can't read from a
FIFO with LOAD DATA INFILE. If you need to read from a FIFO (for
example the output from gunzip), use LOAD DATA LOCAL INFILE
instead.
You can also load datafiles by using the mysqlimport utility; it
operates by sending a LOAD DATA INFILE command to the server. The
--local option causes mysqlimport to read datafiles from the
client host. You can specify the --compress option to get better
performance over slow networks if the client and server support the
compressed protocol.
When locating files on the server host, the server uses the following rules:
If an absolute pathname is given, the server uses the pathname as is.
If a relative pathname with one or more leading components is given, the
server searches for the file relative to its data directory.
If a filename with no leading components is given, the server looks for
the file in the database directory of the current database.
Note that these rules mean a file given as `./myfile.txt' is read from
the server's data directory, whereas a file given as `myfile.txt' is
read from the database directory of the current database. For example,
the following LOAD DATA statement reads the file `data.txt'
from the database directory for db1 because db1 is the current
database, even though the statement explicitly loads the file into a
table in the db2 database:
mysql> USE db1;
mysql> LOAD DATA INFILE "data.txt" INTO TABLE db2.my_table;
The REPLACE and IGNORE keywords control handling of input
records that duplicate existing records on unique key values.
If you specify REPLACE, new rows replace existing rows (in other
words, rows that have the same value for a primary or unique index as an
existing row). See section 6.4.8 REPLACE Syntax.
If you specify IGNORE, input rows that duplicate an existing row
on a unique key value are skipped. If you don't specify either option,
an error occurs when a duplicate key value is found, and the rest of the
text file is ignored.
If you want to ignore foreign key constraints during load you can do
SET FOREIGN_KEY_CHECKS=0 before executing LOAD DATA.
If you load data from a local file using the LOCAL keyword, the server
has no way to stop transmission of the file in the middle of the operation,
so the default behaviour is the same as if IGNORE is specified.
If you use LOAD DATA INFILE on an empty MyISAM table, all
non-unique indexes are created in a separate batch (like in
REPAIR). This normally makes LOAD DATA INFILE much faster
when you have many indexes. Normally this is very fast, but in some
extreme cases you can create the indexes even faster by turning them off
with ALTER TABLE .. DISABLE KEYS and use ALTER TABLE .. ENABLE
KEYS to recreate the indexes.
See section 4.4.6 Using myisamchk for Table Maintenance and Crash Recovery.
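A sketch of this pattern (the table and file names are only illustrative):
mysql> ALTER TABLE tbl_name DISABLE KEYS;
mysql> LOAD DATA INFILE '/tmp/data.txt' INTO TABLE tbl_name;
mysql> ALTER TABLE tbl_name ENABLE KEYS;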
LOAD DATA INFILE is the complement of SELECT ... INTO OUTFILE.
See section 6.4.1 SELECT Syntax.
To write data from a database to a file, use SELECT ... INTO OUTFILE.
To read the file back into the database, use LOAD DATA INFILE.
The syntax of the FIELDS and LINES clauses is the same for
both commands. Both clauses are optional, but FIELDS
must precede LINES if both are specified.
If you specify a FIELDS clause,
each of its subclauses (TERMINATED BY, [OPTIONALLY] ENCLOSED
BY, and ESCAPED BY) is also optional, except that you must
specify at least one of them.
If you don't specify a FIELDS clause, the defaults are the
same as if you had written this:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
If you don't specify a LINES clause, the default
is the same as if you had written this:
LINES TERMINATED BY '\n'
Note: If you have generated the text file on a Windows system,
you may have to change the above to LINES TERMINATED BY '\r\n',
as Windows uses two characters as a line terminator. Some programs, like
WordPad, may use \r as a line terminator.
If all the lines you want to read in have a common prefix that you want
to skip, you can use LINES STARTING BY prefix_string for this.
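For example (a sketch assuming each input line begins with the prefix 'xxx:'):
mysql> LOAD DATA INFILE '/tmp/data.txt' INTO TABLE tbl_name
    -> LINES STARTING BY 'xxx:';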
In other words, the defaults cause LOAD DATA INFILE to act as follows
when reading input:
If LINES STARTING BY prefix is used, lines are read until the prefix is found,
and reading starts at the character after the prefix. If a line doesn't include
the prefix, it is skipped.
Conversely, the defaults cause SELECT ... INTO OUTFILE to act as
follows when writing output:
Note that to write FIELDS ESCAPED BY '\\', you must specify two
backslashes for the value to be read as a single backslash.
The IGNORE number LINES option can be used to ignore a header of
column names at the start of the file:
mysql> LOAD DATA INFILE "/tmp/file_name" INTO TABLE test IGNORE 1 LINES;
When you use SELECT ... INTO OUTFILE in tandem with LOAD
DATA INFILE to write data from a database into a file and then read
the file back into the database later, the field and line handling
options for both commands must match. Otherwise, LOAD DATA
INFILE will not interpret the contents of the file properly. Suppose
you use SELECT ... INTO OUTFILE to write a file with
fields delimited by commas:
mysql> SELECT * INTO OUTFILE 'data.txt'
-> FIELDS TERMINATED BY ','
-> FROM ...;
To read the comma-delimited file back in, the correct statement would be:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
-> FIELDS TERMINATED BY ',';
If instead you tried to read in the file with the statement shown here, it
wouldn't work because it instructs LOAD DATA INFILE to look for
tabs between fields:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE table2
-> FIELDS TERMINATED BY '\t';
The likely result is that each input line would be interpreted as a single field.
LOAD DATA INFILE can be used to read files obtained from
external sources, too. For example, a file in dBASE format will have
fields separated by commas and enclosed in double quotes. If lines in
the file are terminated by newlines, the command shown here
illustrates the field and line handling options you would use to load
the file:
mysql> LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
-> FIELDS TERMINATED BY ',' ENCLOSED BY '"'
-> LINES TERMINATED BY '\n';
Any of the field or line handling options may specify an empty string
(''). If not empty, the FIELDS [OPTIONALLY] ENCLOSED BY
and FIELDS ESCAPED BY values must be a single character. The
FIELDS TERMINATED BY and LINES TERMINATED BY values may
be more than one character. For example, to write lines that are
terminated by carriage return-linefeed pairs, or to read a file
containing such lines, specify a LINES TERMINATED BY '\r\n'
clause.
For example, to read a file of jokes, that are separated with a line
of %%, into a SQL table you can do:
CREATE TABLE jokes (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
                    joke TEXT NOT NULL);
LOAD DATA INFILE "/tmp/jokes.txt" INTO TABLE jokes
    FIELDS TERMINATED BY "" LINES TERMINATED BY "\n%%\n" (joke);
FIELDS [OPTIONALLY] ENCLOSED BY controls quoting of fields. For
output (SELECT ... INTO OUTFILE), if you omit the word
OPTIONALLY, all fields are enclosed by the ENCLOSED BY
character. An example of such output (using a comma as the field
delimiter) is shown here:
"1","a string","100.20"
"2","a string containing a , comma","102.20"
"3","a string containing a \" quote","102.20"
"4","a string containing a \", quote and comma","102.20"
If you specify OPTIONALLY, the ENCLOSED BY character is
used only to enclose CHAR and VARCHAR fields:
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a \" quote",102.20
4,"a string containing a \", quote and comma",102.20
Note that occurrences of the ENCLOSED BY character within a
field value are escaped by prefixing them with the ESCAPED BY
character. Also note that if you specify an empty ESCAPED BY
value, it is possible to generate output that cannot be read properly by
LOAD DATA INFILE. For example, the output just shown above would
appear as shown here if the escape character is empty. Observe that the
second field in the fourth line contains a comma following the quote, which
(erroneously) appears to terminate the field:
1,"a string",100.20
2,"a string containing a , comma",102.20
3,"a string containing a " quote",102.20
4,"a string containing a ", quote and comma",102.20
For input, the ENCLOSED BY character, if present, is stripped from the
ends of field values. (This is true whether OPTIONALLY is
specified; OPTIONALLY has no effect on input interpretation.)
Occurrences of the ENCLOSED BY character preceded by the
ESCAPED BY character are interpreted as part of the current field
value. In addition, duplicated ENCLOSED BY characters occurring
within fields are interpreted as single ENCLOSED BY characters if the
field itself starts with that character. For example, if ENCLOSED BY
'"' is specified, quotes are handled as shown here:
"The ""BIG"" boss"  -> The "BIG" boss
The "BIG" boss      -> The "BIG" boss
The ""BIG"" boss    -> The ""BIG"" boss
FIELDS ESCAPED BY controls how to write or read special characters.
If the FIELDS ESCAPED BY character is not empty, it is used to prefix
the following characters on output:
The FIELDS ESCAPED BY character
The FIELDS [OPTIONALLY] ENCLOSED BY character
The first character of the FIELDS TERMINATED BY and
LINES TERMINATED BY values
ASCII 0 (what is actually written following the escape character is
ASCII '0', not a zero-valued byte)
If the FIELDS ESCAPED BY character is empty, no characters are escaped.
It is probably not a good idea to specify an empty escape character,
particularly if field values in your data contain any of the characters in
the list just given.
For input, if the FIELDS ESCAPED BY character is not empty, occurrences
of that character are stripped and the following character is taken literally
as part of a field value. The exceptions are an escaped `0' or
`N' (for example, \0 or \N if the escape character is
`\'). These sequences are interpreted as ASCII 0 (a zero-valued
byte) and NULL. See below for the rules on NULL handling.
For more information about `\'-escape syntax, see section 6.1.1 Literals: How to Write Strings and Numbers.
In certain cases, field and line handling options interact:
If LINES TERMINATED BY is an empty string and FIELDS
TERMINATED BY is non-empty, lines are also terminated with
FIELDS TERMINATED BY.
If the FIELDS TERMINATED BY and FIELDS ENCLOSED BY values
are both empty (''), a fixed-row (non-delimited) format is used.
With fixed-row format, no delimiters are used between fields (but you
can still have a line terminator). Instead, column values are written
and read using the ``display'' widths of the columns. For example, if a
column is declared as INT(7), values for the column are written
using 7-character fields. On input, values for the column are obtained
by reading 7 characters.
LINES TERMINATED BY is still used to separate lines. If a line
doesn't contain all fields, the rest of the fields will be set to their
default values. If you don't have a line terminator, you should set this
to ''. In this case the text file must contain all fields for
each row.
Fixed-row format also affects handling of NULL values; see below.
Note that fixed-size format will not work if you are using a multi-byte
character set.
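A hedged sketch of a fixed-row import (the table name and file are only
illustrative; the column display widths determine the field widths):
mysql> LOAD DATA INFILE '/tmp/fixed.txt' INTO TABLE tbl_name
    -> FIELDS TERMINATED BY '' ENCLOSED BY ''
    -> LINES TERMINATED BY '\n';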
Handling of NULL values varies, depending on the FIELDS and
LINES options you use:
With the default FIELDS and LINES values,
NULL is written as \N for output and \N is read
as NULL for input (assuming the ESCAPED BY character
is `\').
FIELDS ENCLOSED BY is not empty, a field containing the literal
word NULL as its value is read as a NULL value (this differs
from the word NULL enclosed within FIELDS ENCLOSED BY
characters, which is read as the string 'NULL').
FIELDS ESCAPED BY is empty, NULL is written as the word
NULL.
FIELDS TERMINATED BY and
FIELDS ENCLOSED BY are both empty), NULL is written as an empty
string. Note that this causes both NULL values and empty strings in
the table to be indistinguishable when written to the file because they are
both written as empty strings. If you need to be able to tell the two apart
when reading the file back in, you should not use fixed-row format.
Some cases are not supported by LOAD DATA INFILE:
Fixed-size rows (FIELDS TERMINATED BY and FIELDS ENCLOSED
BY both empty) and BLOB or TEXT columns.
If you specify one separator that is the same as or a prefix of another,
LOAD DATA INFILE won't be able to interpret the input properly.
For example, the following FIELDS clause would cause problems:
FIELDS TERMINATED BY '"' ENCLOSED BY '"'
If FIELDS ESCAPED BY is empty, a field value that contains an occurrence
of FIELDS ENCLOSED BY or LINES TERMINATED BY
followed by the FIELDS TERMINATED BY value will cause LOAD
DATA INFILE to stop reading a field or line too early.
This happens because LOAD DATA INFILE cannot properly determine
where the field or line value ends.
The following example loads all columns of the persondata table:
mysql> LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata;
No field list is specified, so LOAD DATA INFILE expects input rows
to contain a field for each table column. The default FIELDS and
LINES values are used.
If you wish to load only some of a table's columns, specify a field list:
mysql> LOAD DATA INFILE 'persondata.txt'
-> INTO TABLE persondata (col1,col2,...);
You must also specify a field list if the order of the fields in the input file differs from the order of the columns in the table. Otherwise, MySQL cannot tell how to match up input fields with table columns.
If a row has too few fields, the columns for which no input field is present
are set to default values. Default value assignment is described in
section 6.5.3 CREATE TABLE Syntax.
An empty field value is interpreted differently than if the field value is missing:
For string types, the column is set to the empty string.
For numeric types, the column is set to 0.
For date and time types, the column is set to the appropriate ``zero''
value for the type.
Note that these are the same values that result if you assign an empty
string explicitly to a string, numeric, or date or time type
in an INSERT or UPDATE statement.
TIMESTAMP columns are only set to the current date and time if there
is a NULL value for the column, or (for the first TIMESTAMP
column only) if the TIMESTAMP column is left out from the field list
when a field list is specified.
If an input row has too many fields, the extra fields are ignored and
the number of warnings is incremented. Note that before MySQL 4.1.1 the
warning count is just a number indicating that something went wrong.
In MySQL 4.1.1 you can do SHOW WARNINGS to get more information about
what went wrong.
LOAD DATA INFILE regards all input as strings, so you can't use
numeric values for ENUM or SET columns the way you can with
INSERT statements. All ENUM and SET values must be
specified as strings!
If you are using the C API, you can get information about the query by
calling the API function mysql_info() when the LOAD DATA INFILE
query finishes. The format of the information string is shown here:
Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
Warnings occur under the same circumstances as when values are inserted
via the INSERT statement (see section 6.4.3 INSERT Syntax), except
that LOAD DATA INFILE also generates warnings when there are too few
or too many fields in the input row. The warnings are not stored anywhere;
the number of warnings can only be used as an indication of whether everything
went well.
If you get warnings and want to know exactly why you got them, one way
to do this is to use SELECT ... INTO OUTFILE into another file
and compare this to your original input file.
If you need LOAD DATA to read from a pipe, you can use the
following trick:
mkfifo /mysql/db/x/x
chmod 666 /mysql/db/x/x
cat < /dev/tcp/10.1.1.12/4711 > /nt/mysql/db/x/x
mysql -e "LOAD DATA INFILE 'x' INTO TABLE x" x
If you are using a version of MySQL older than 3.23.25
you can only do the above with LOAD DATA LOCAL INFILE.
In MySQL 4.1.1 you can use SHOW WARNINGS to get a list of the first
max_error_count warnings. See section 4.5.7.9 SHOW WARNINGS | ERRORS.
For more information about the efficiency of INSERT versus
LOAD DATA INFILE and speeding up LOAD DATA INFILE,
see section 5.2.9 Speed of INSERT Queries.
DO Syntax
DO expression, [expression, ...]
Execute the expression but don't return any results. This is a
shorthand for SELECT expression, expression, but has the advantage
that it's slightly faster when you don't care about the result.
This is mainly useful with functions that have side effects, such as
RELEASE_LOCK().
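A minimal sketch (the lock name is hypothetical); the return value of GET_LOCK() is normally of interest, while the result of RELEASE_LOCK() can simply be discarded with DO:
mysql> SELECT GET_LOCK('my_lock', 10);
mysql> DO RELEASE_LOCK('my_lock');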
CREATE, DROP, ALTER
CREATE DATABASE Syntax
CREATE DATABASE [IF NOT EXISTS] db_name
CREATE DATABASE creates a database with the given name.
Rules for
allowable database names are given in section 6.1.2 Database, Table, Index, Column, and Alias Names. An error occurs if
the database already exists and you didn't specify IF NOT EXISTS.
Databases in MySQL are implemented as directories containing files
that correspond to tables in the database. Because there are no tables in a
database when it is initially created, the CREATE DATABASE statement
only creates a directory under the MySQL data directory.
You can also create databases with mysqladmin.
See section 4.8 MySQL Client-Side Scripts and Utilities.
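For example (the database name is hypothetical), from the mysql client or with mysqladmin:
mysql> CREATE DATABASE IF NOT EXISTS menagerie;
shell> mysqladmin create menagerie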
DROP DATABASE Syntax
DROP DATABASE [IF EXISTS] db_name
DROP DATABASE drops all tables in the database and deletes the
database. If you do a DROP DATABASE on a symbolically linked
database, both the link and the original database are deleted. Be
VERY careful with this command!
DROP DATABASE returns the number of files that were removed from
the database directory. Normally, this is three times the number of
tables, because normally each table corresponds to a `.MYD' file, a
`.MYI' file, and a `.frm' file.
The DROP DATABASE command removes from the given database
directory all files with the following extensions:
| Ext  | Ext  | Ext  | Ext  |
| .BAK | .DAT | .HSH | .ISD |
| .ISM | .MRG | .MYD | .MYI |
| .db  | .frm |      |      |
All subdirectories that consist of 2 digits (RAID directories)
are also removed.
In MySQL Version 3.22 or later, you can use the keywords
IF EXISTS to prevent an error from occurring if the database doesn't
exist.
You can also drop databases with mysqladmin. See section 4.8 MySQL Client-Side Scripts and Utilities.
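For example (the database name is hypothetical):
mysql> DROP DATABASE IF EXISTS menagerie;
shell> mysqladmin drop menagerie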
CREATE TABLE Syntax
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name [(create_definition,...)]
[table_options] [select_statement]
or
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name (LIKE old_table_name);
create_definition:
col_name type [NOT NULL | NULL] [DEFAULT default_value] [AUTO_INCREMENT]
[PRIMARY KEY] [reference_definition]
or PRIMARY KEY (index_col_name,...)
or KEY [index_name] (index_col_name,...)
or INDEX [index_name] (index_col_name,...)
or UNIQUE [INDEX] [index_name] (index_col_name,...)
or FULLTEXT [INDEX] [index_name] (index_col_name,...)
or [CONSTRAINT symbol] FOREIGN KEY [index_name] (index_col_name,...)
[reference_definition]
or CHECK (expr)
type:
TINYINT[(length)] [UNSIGNED] [ZEROFILL]
or SMALLINT[(length)] [UNSIGNED] [ZEROFILL]
or MEDIUMINT[(length)] [UNSIGNED] [ZEROFILL]
or INT[(length)] [UNSIGNED] [ZEROFILL]
or INTEGER[(length)] [UNSIGNED] [ZEROFILL]
or BIGINT[(length)] [UNSIGNED] [ZEROFILL]
or REAL[(length,decimals)] [UNSIGNED] [ZEROFILL]
or DOUBLE[(length,decimals)] [UNSIGNED] [ZEROFILL]
or FLOAT[(length,decimals)] [UNSIGNED] [ZEROFILL]
or DECIMAL(length,decimals) [UNSIGNED] [ZEROFILL]
or NUMERIC(length,decimals) [UNSIGNED] [ZEROFILL]
or CHAR(length) [BINARY]
or VARCHAR(length) [BINARY]
or DATE
or TIME
or TIMESTAMP
or DATETIME
or TINYBLOB
or BLOB
or MEDIUMBLOB
or LONGBLOB
or TINYTEXT
or TEXT
or MEDIUMTEXT
or LONGTEXT
or ENUM(value1,value2,value3,...)
or SET(value1,value2,value3,...)
index_col_name:
col_name [(length)]
reference_definition:
REFERENCES tbl_name [(index_col_name,...)]
[MATCH FULL | MATCH PARTIAL]
[ON DELETE reference_option]
[ON UPDATE reference_option]
reference_option:
RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT
table_options:
TYPE = {BDB | HEAP | ISAM | InnoDB | MERGE | MRG_MYISAM | MYISAM }
or AUTO_INCREMENT = #
or AVG_ROW_LENGTH = #
or CHECKSUM = {0 | 1}
or COMMENT = "string"
or MAX_ROWS = #
or MIN_ROWS = #
or PACK_KEYS = {0 | 1 | DEFAULT}
or PASSWORD = "string"
or DELAY_KEY_WRITE = {0 | 1}
or ROW_FORMAT= { default | dynamic | fixed | compressed }
or RAID_TYPE= {1 | STRIPED | RAID0 } RAID_CHUNKS=# RAID_CHUNKSIZE=#
or UNION = (table_name,[table_name...])
or INSERT_METHOD= {NO | FIRST | LAST }
or DATA DIRECTORY="absolute path to directory"
or INDEX DIRECTORY="absolute path to directory"
select_statement:
[IGNORE | REPLACE] SELECT ... (Some legal select statement)
CREATE TABLE
creates a table with the given name in the current database.
Rules for allowable table names are given in section 6.1.2 Database, Table, Index, Column, and Alias Names. An error occurs if there is no current database or if the table already exists.
In MySQL Version 3.22 or later, the table name can be specified as
db_name.tbl_name. This works regardless of whether there is a
current database.
From MySQL Version 3.23, you can use the TEMPORARY keyword when
you create a table. The name is restricted to the current connection, and
the temporary table will automatically be deleted when the connection is closed.
This means that two different
connections can both use the same temporary table name without conflicting
with each other or with an existing table of the same name. (The existing table
is hidden until the temporary table is deleted.) From MySQL 4.0.2 you must
have the CREATE TEMPORARY TABLES privilege to be able to create
temporary tables.
In MySQL Version 3.23 or later, you can use the keywords
IF NOT EXISTS so that an error does not occur if the table already
exists. Note that there is no verification that the table structures are
identical.
In MySQL 4.1 you can use LIKE to create a table based on the
definition of another table. In MySQL 4.1 you can also specify the
type for a generated column:
CREATE TABLE foo (a tinyint not null) SELECT b+1 AS 'a' FROM bar;
Each table tbl_name is represented by some files in the database
directory. In the case of MyISAM-type tables you will get:
| File         | Purpose                      |
| tbl_name.frm | Table definition (form) file |
| tbl_name.MYD | Datafile                     |
| tbl_name.MYI | Index file                   |
For more information on the properties of the various column types, see section 6.2 Column Types:
If neither NULL nor NOT NULL is specified, the column
is treated as though NULL had been specified.
An integer column may have the additional attribute AUTO_INCREMENT.
When you insert a value of NULL (recommended) or 0 into an
AUTO_INCREMENT column, the column is set to value+1, where
value is the largest value for the column currently in the table.
AUTO_INCREMENT sequences begin with 1.
See section 8.1.3.130 mysql_insert_id().
If you delete the row containing the maximum value for an
AUTO_INCREMENT column, the value will be reused with an
ISAM or BDB table but not with a
MyISAM or InnoDB table. If you delete all rows in the table
with DELETE FROM table_name (without a WHERE) in
AUTOCOMMIT mode, the sequence starts over for all table types.
Note: there can be only one AUTO_INCREMENT column per
table, it must be indexed, and it cannot have a DEFAULT value.
In MySQL Version 3.23, an AUTO_INCREMENT column will work properly only
if it contains only positive values. Inserting a
negative number is regarded as inserting a very large positive number.
This is done to avoid precision problems when numbers 'wrap' over from
positive to negative and also to ensure that one doesn't accidentally
get an AUTO_INCREMENT column that contains 0.
In MyISAM and BDB tables you can specify an AUTO_INCREMENT secondary
column in a multi-column key. See section 3.5.9 Using AUTO_INCREMENT.
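A sketch of such a table (the table and column names are illustrative); with a MyISAM table, id is incremented independently within each grp value:
mysql> CREATE TABLE animals (
    ->     grp ENUM('fish','mammal','bird') NOT NULL,
    ->     id MEDIUMINT NOT NULL AUTO_INCREMENT,
    ->     name CHAR(30) NOT NULL,
    ->     PRIMARY KEY (grp,id)
    -> ) TYPE=MyISAM;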
To make MySQL compatible with some ODBC applications, you can find
the last inserted row with the following query:
SELECT * FROM tbl_name WHERE auto_col IS NULL
CREATE TABLE automatically commits the current InnoDB
transaction if MySQL binary logging is used.
NULL values are handled differently for TIMESTAMP columns than
for other column types. You cannot store a literal NULL in a
TIMESTAMP column; setting the column to NULL sets it to the
current date and time. Because TIMESTAMP columns behave this way, the
NULL and NOT NULL attributes do not apply in the normal way and
are ignored if you specify them.
On the other hand, to make it easier for MySQL clients to use
TIMESTAMP columns, the server reports that such columns may be
assigned NULL values (which is true), even though TIMESTAMP
never actually will contain a NULL value. You can see this when you
use DESCRIBE tbl_name to get a description of your table.
Note that setting a TIMESTAMP column to 0 is not the same
as setting it to NULL, because 0 is a valid TIMESTAMP
value.
The DEFAULT value has to be a constant; it cannot be a function or
an expression.
If no DEFAULT value is specified for a column, MySQL
automatically assigns one, as follows.
If the column may take NULL as a value, the default value is
NULL.
If the column is declared as NOT NULL, the default value depends on
the column type:
For numeric types not declared with the AUTO_INCREMENT
attribute, the default is 0. For an AUTO_INCREMENT column, the
default value is the next value in the sequence.
For date and time types other than TIMESTAMP, the default is the
appropriate zero value for the type. For the first TIMESTAMP
column in a table, the default value is the current date and time.
See section 6.2.2 Date and Time Types.
For string types other than ENUM, the default value is the empty
string. For ENUM, the default is the first enumeration value.
Because default values must be constants, you cannot, for example, set the
default for a date column to the value of a function such as
NOW() or CURRENT_DATE.
KEY is a synonym for INDEX.
A UNIQUE key can have only distinct values. An
error occurs if you try to add a new row with a key that matches an existing
row.
A PRIMARY KEY is a unique KEY where all key columns must be
defined as NOT NULL. If they are not explicitly declared as
NOT NULL, it will be done implicitly (and quietly). In MySQL
the key is named PRIMARY. A table can have only one PRIMARY KEY.
If you don't have a PRIMARY KEY and some applications ask for the
PRIMARY KEY in your tables, MySQL will return the first
UNIQUE key, which doesn't have any NULL columns, as the
PRIMARY KEY.
PRIMARY KEY can be a multiple-column index. However, you cannot
create a multiple-column index using the PRIMARY KEY key attribute in a
column specification. Doing so will mark only that single column as primary.
You must use the PRIMARY KEY(index_col_name, ...) syntax.
If a PRIMARY or UNIQUE key consists of only one column and this
is of an integer type, you can also refer to the column as _rowid
(new in Version 3.23.11).
If you don't assign a name to an index, the index will be assigned the same
name as the first index_col_name, with an optional suffix (_2,
_3, ...) to make it unique. You can see index names for a
table using SHOW INDEX FROM tbl_name.
See section 4.5.7.1 Retrieving information about Database, Tables, Columns, and Indexes.
The MyISAM, InnoDB, and BDB table types support indexes on columns that can have
NULL values. In other cases you must declare such columns
NOT NULL or an error results.
With the col_name(length) syntax, you can specify an index that
uses only a part of a CHAR or VARCHAR column. This can
make the index file much smaller.
See section 5.4.4 Column Indexes.
The MyISAM table type supports indexing on BLOB and
TEXT columns. When putting an index on a BLOB or TEXT
column you MUST always specify the length of the index:
CREATE TABLE test (blob_col BLOB, INDEX(blob_col(10)));
If you use ORDER BY or GROUP BY with a TEXT or
BLOB column, only the first max_sort_length bytes are used.
See section 6.2.3.2 The BLOB and TEXT Types.
You can also create special FULLTEXT indexes. They are used for full-text search. Only the
MyISAM table type supports FULLTEXT indexes. They can be created
only from CHAR, VARCHAR, and TEXT columns.
Indexing always happens over the entire column; partial indexing is not
supported. See section 6.8 MySQL Full-text Search for details of operation.
InnoDB tables support checking of
foreign key constraints. See section 7.5 InnoDB Tables. Note that the
FOREIGN KEY syntax in InnoDB is more restricted than
the syntax presented above. InnoDB does not allow
index_name to be specified, and the columns of the referenced
table always have to be explicitly named. Starting from
4.0.8 InnoDB supports both ON DELETE and ON UPDATE
actions on foreign keys.
See the InnoDB manual section for the precise syntax. See section 7.5 InnoDB Tables.
For other table types, MySQL Server does parse the FOREIGN KEY,
CHECK, and REFERENCES syntax in CREATE TABLE commands,
but without further action being taken. See section 1.8.4.5 Foreign Keys.
Each NULL column takes one bit extra, rounded up to the nearest byte. The maximum row length in bytes can be calculated as follows:
row length = 1
+ (sum of column lengths)
+ (number of NULL columns + 7)/8
+ (number of variable-length columns)
The table_options and SELECT options are only
implemented in MySQL Version 3.23 and above.
The different table types are:
| Table type | Description |
| BDB or BerkeleyDB | Transaction-safe tables with page locking. See section 7.6 BDB or BerkeleyDB Tables. |
| HEAP | The data for this table is only stored in memory. See section 7.4 HEAP Tables. |
| ISAM | The original storage engine. See section 7.3 ISAM Tables. |
| InnoDB | Transaction-safe tables with row locking. See section 7.5 InnoDB Tables. |
| MERGE | A collection of MyISAM tables used as one table. See section 7.2 MERGE Tables. |
| MRG_MyISAM | An alias for MERGE tables. |
| MyISAM | The new binary portable storage engine that is replacing ISAM. See section 7.1 MyISAM Tables. |
If TYPE=BDB is specified and that distribution
of MySQL does not support BDB tables, the table will be created
as MyISAM instead. This is to make it possible to have a replication
setup where you have transactional tables on the master but tables created
on the slave are non-transactional (to get more speed). In MySQL 4.1.1 you
get a warning if the specified table type is not honored.
The other table options are used to optimise the behaviour of the
table. In most cases, you don't have to specify any of them.
The options work for all table types, if not otherwise indicated:
| Option | Description |
| AUTO_INCREMENT | The next AUTO_INCREMENT value you want to set for your table (MyISAM only; to set the first auto-increment value for an InnoDB table, insert a dummy row with a value one less, and delete the dummy row). |
| AVG_ROW_LENGTH | An approximation of the average row length for your table. You only need to set this for large tables with variable-size records. |
| CHECKSUM | Set this to 1 if you want MySQL to maintain a checksum for all rows (makes the table a little slower to update but makes it easier to find corrupted tables) (MyISAM). |
| COMMENT | A 60-character comment for your table. |
| MAX_ROWS | Maximum number of rows you plan to store in the table. |
| MIN_ROWS | Minimum number of rows you plan to store in the table. |
| PACK_KEYS | Set this to 1 if you want to have a smaller index. This usually makes updates slower and reads faster (MyISAM, ISAM). Setting this to 0 will disable all packing of keys. Setting this to DEFAULT (MySQL 4.0) will tell the storage engine to pack only long CHAR/VARCHAR columns. |
| PASSWORD | Encrypt the `.frm' file with a password. This option doesn't do anything in the standard MySQL version. |
| DELAY_KEY_WRITE | Set this to 1 if you want to delay key table updates until the table is closed (MyISAM). |
| ROW_FORMAT | Defines how the rows should be stored. Currently this option only works with MyISAM tables, which support the DYNAMIC and FIXED row formats. See section 7.1.2 MyISAM Table Formats. |
When you create a MyISAM table, MySQL uses the product of
max_rows * avg_row_length to decide how big the resulting table
will be. If you don't specify any of the above options, the maximum size
for a table will be 4G (or 2G if your operating system only supports 2G
tables). The reason for this is just to keep down the pointer sizes
to make the index smaller and faster if you don't really need big files.
If you don't use PACK_KEYS, the default is to only pack strings,
not numbers. If you use PACK_KEYS=1, numbers will be packed as well.
When packing binary number keys, MySQL will use prefix compression.
This means that you will only get a big benefit from this if you have
many numbers that are the same. Prefix compression means that every
key needs one extra byte to indicate how many bytes of the previous key are
the same for the next key (note that the pointer to the row is stored
in high-byte-first-order directly after the key, to improve
compression). This means that if you have many equal keys on two rows
in a row, all following 'same' keys will usually only take 2 bytes
(including the pointer to the row). Compare this to the ordinary case
where the following keys will take storage_size_for_key +
pointer_size (usually 4). On the other hand, if all keys are
totally different, you will lose 1 byte per key, if the key isn't a
key that can have NULL values. (In this case the packed key length will
be stored in the same byte that is used to mark if a key is NULL.)
If you specify a SELECT after the CREATE statement,
MySQL will create new fields for all elements in the
SELECT. For example:
mysql> CREATE TABLE test (a INT NOT NULL AUTO_INCREMENT,
-> PRIMARY KEY (a), KEY(b))
-> TYPE=MyISAM SELECT b,c FROM test2;
This will create a MyISAM table with three columns, a, b, and c.
Notice that the columns from the SELECT statement are appended to
the right side of the table, not overlapped onto it. Take the following
example:
mysql> SELECT * FROM foo;
+---+
| n |
+---+
| 1 |
+---+
mysql> CREATE TABLE bar (m INT) SELECT n FROM foo;
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0
mysql> SELECT * FROM bar;
+------+---+
| m    | n |
+------+---+
| NULL | 1 |
+------+---+
1 row in set (0.00 sec)
For each row in table
foo, a row is inserted in bar with
the values from foo and default values for the new columns.
CREATE TABLE ... SELECT will not automatically create any indexes
for you. This is done intentionally to make the command as flexible as
possible. If you want to have indexes in the created table, you should
specify these before the SELECT statement:
mysql> CREATE TABLE bar (UNIQUE (n)) SELECT n FROM foo;
If any errors occur while copying the data to the table, it will
automatically be deleted. To ensure that the update log/binary log can be
used to re-create the original tables, MySQL will not allow concurrent
inserts during
CREATE TABLE ... SELECT.
The RAID_TYPE option will help you to break the 2G/4G limit for
the MyISAM datafile (not the index file) on operating systems that
don't support big files. Note that this option is not recommended for a
filesystem that supports big files!
You can get more speed from the I/O bottleneck by putting RAID
directories on different physical disks. RAID_TYPE will work on
any OS, as long as you have configured MySQL with --with-raid.
For now the only allowed RAID_TYPE is STRIPED (1
and RAID0 are aliases for this).
If you specify RAID_TYPE=STRIPED for a MyISAM table,
MyISAM will create RAID_CHUNKS subdirectories named 00,
01, 02 in the database directory. In each of these directories
MyISAM will create a table_name.MYD. When writing data
to the datafile, the RAID handler will map the first
RAID_CHUNKSIZE *1024 bytes to the first file, the next
RAID_CHUNKSIZE *1024 bytes to the next file and so on.
UNION is used when you want to use a collection of identical
tables as one. This only works with MERGE tables.
See section 7.2 MERGE Tables.
For the moment you need to have SELECT, UPDATE, and
DELETE privileges on the tables you map to a MERGE table.
All mapped tables must be in the same database as the MERGE table.
If you want to insert data into a MERGE table, you have to specify with
INSERT_METHOD into which table the row should be inserted.
See section 7.2 MERGE Tables. This option was introduced in MySQL 4.0.0.
In the created table, the PRIMARY key will be placed first, followed
by all UNIQUE keys and then the normal keys. This helps the
MySQL optimiser to prioritise which key to use and also more quickly
detect duplicated UNIQUE keys.
By using DATA DIRECTORY="directory" or INDEX
DIRECTORY="directory" you can specify where the storage engine should
put its table and index files. Note that the directory should be a full
path to the directory (not a relative path).
This only works for MyISAM tables in MySQL 4.0, when you
are not using the --skip-symlink option. See section 5.6.1.2 Using Symbolic Links for Tables.
In some cases, MySQL silently changes a column specification from
that given in a CREATE TABLE statement. (This may also occur with
ALTER TABLE.):
VARCHAR columns with a length less than four are changed to
CHAR.
If a table contains any variable-length columns (VARCHAR, TEXT, or BLOB),
all CHAR columns longer than three characters are changed to
VARCHAR columns. This doesn't affect how you use the columns in
any way; in MySQL, VARCHAR is just a different way to
store characters. MySQL performs this conversion because it
saves space and makes table operations faster. See section 7 MySQL Table Types.
TIMESTAMP display sizes must be even and in the range from 2 to 14.
If you specify a display size of 0 or greater than 14, the size is coerced
to 14. Odd-valued sizes in the range from 1 to 13 are coerced
to the next higher even number.
You cannot store a literal NULL in a TIMESTAMP column; setting
it to NULL sets it to the current date and time. Because
TIMESTAMP columns behave this way, the NULL and NOT NULL
attributes do not apply in the normal way and are ignored if you specify
them. DESCRIBE tbl_name always reports that a TIMESTAMP
column may be assigned NULL values.
If you want to see whether MySQL used a column type other
than the one you specified, issue a DESCRIBE tbl_name statement after
creating or altering your table.
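For example (table and column names hypothetical), given the VARCHAR rule above, the following column should be reported by DESCRIBE as CHAR(2):
mysql> CREATE TABLE chk (c VARCHAR(2));
mysql> DESCRIBE chk;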
Certain other column type changes may occur if you compress a table
using myisampack. See section 7.1.2.3 Compressed Table Characteristics.
ALTER TABLE Syntax
ALTER [IGNORE] TABLE tbl_name alter_spec [, alter_spec ...]
alter_specification:
ADD [COLUMN] create_definition [FIRST | AFTER column_name ]
or ADD [COLUMN] (create_definition, create_definition,...)
or ADD INDEX [index_name] (index_col_name,...)
or ADD PRIMARY KEY (index_col_name,...)
or ADD UNIQUE [index_name] (index_col_name,...)
or ADD FULLTEXT [index_name] (index_col_name,...)
or ADD [CONSTRAINT symbol] FOREIGN KEY [index_name] (index_col_name,...)
[reference_definition]
or ALTER [COLUMN] col_name {SET DEFAULT literal | DROP DEFAULT}
or CHANGE [COLUMN] old_col_name create_definition
[FIRST | AFTER column_name]
or MODIFY [COLUMN] create_definition [FIRST | AFTER column_name]
or DROP [COLUMN] col_name
or DROP PRIMARY KEY
or DROP INDEX index_name
or DISABLE KEYS
or ENABLE KEYS
or RENAME [TO] new_tbl_name
or ORDER BY col
or table_options
ALTER TABLE allows you to change the structure of an existing table.
For example, you can add or delete columns, create or destroy indexes, change
the type of existing columns, or rename columns or the table itself. You can
also change the comment for the table and type of the table.
See section 6.5.3 CREATE TABLE Syntax.
If you use ALTER TABLE to change a column specification but
DESCRIBE tbl_name indicates that your column was not changed, it is
possible that MySQL ignored your modification for one of the reasons
described in section 6.5.3.1 Silent Column Specification Changes. For example, if you try to change
a VARCHAR column to CHAR, MySQL will still use
VARCHAR if the table contains other variable-length columns.
ALTER TABLE works by making a temporary copy of the original table.
The alteration is performed on the copy, then the original table is
deleted and the new one is renamed. This is done in such a way that
all updates are automatically redirected to the new table without
any failed updates. While ALTER TABLE is executing, the original
table is readable by other clients. Updates and writes to the table
are stalled until the new table is ready.
Note that if you use any other option to ALTER TABLE than
RENAME, MySQL will always create a temporary table, even
if the data wouldn't strictly need to be copied (like when you change the
name of a column). We plan to fix this in the future, but as one doesn't
normally do ALTER TABLE that often this isn't that high on our TODO.
For MyISAM tables, you can speed up the index recreation part (which is the
slowest part of the recreation process) by setting the
myisam_sort_buffer_size variable to a high value.
To use ALTER TABLE, you need ALTER, INSERT,
and CREATE privileges on the table.
IGNORE is a MySQL extension to SQL-92.
It controls how ALTER TABLE works if there are duplicates on
unique keys in the new table.
If IGNORE isn't specified, the copy is aborted and rolled back.
If IGNORE is specified, then for rows with duplicates on a unique
key, only the first row is used; the others are deleted.
You can issue multiple ADD, ALTER, DROP, and
CHANGE clauses in a single ALTER TABLE statement. This is a
MySQL extension to SQL-92, which allows only one of each clause
per ALTER TABLE statement.
CHANGE col_name, DROP col_name, and DROP
INDEX are MySQL extensions to SQL-92.
MODIFY is an Oracle extension to ALTER TABLE.
COLUMN is a pure noise word and can be omitted.
If you use ALTER TABLE tbl_name RENAME TO new_name without any other
options, MySQL simply renames the files that correspond to the table
tbl_name. There is no need to create the temporary table.
See section 6.5.5 RENAME TABLE Syntax.
The create_definition clauses use the same syntax for ADD and
CHANGE as for CREATE TABLE. Note that this syntax includes
the column name, not just the column type.
See section 6.5.3 CREATE TABLE Syntax.
You can rename a column using a CHANGE old_col_name create_definition
clause. To do so, specify the old and new column names and the type that
the column currently has. For example, to rename an INTEGER column
from a to b, you can do this:
mysql> ALTER TABLE t1 CHANGE a b INTEGER;
If you want to change a column's type but not the name,
CHANGE
syntax still requires two column names even if they are the same. For
example:
mysql> ALTER TABLE t1 CHANGE b b BIGINT NOT NULL;
However, as of MySQL Version 3.22.16a, you can also use
MODIFY
to change a column's type without renaming it:
mysql> ALTER TABLE t1 MODIFY b BIGINT NOT NULL;
If you use CHANGE or MODIFY to shorten a column for which
an index exists on part of the column (for instance, if you have an index
on the first 10 characters of a VARCHAR column), you cannot make
the column shorter than the number of characters that are indexed.
When you change a column type using CHANGE or MODIFY,
MySQL tries to convert existing data to the new type as well as possible.
You can use ADD ... FIRST or
ADD ... AFTER col_name to add a column at a specific position
within a table row. The default is to add the column last.
From MySQL Version 4.0.1, you can also use the FIRST and
AFTER keywords in CHANGE or MODIFY.
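For example (column and table names hypothetical; the MODIFY ... FIRST form assumes MySQL 4.0.1 or later):
mysql> ALTER TABLE t1 ADD d INT AFTER b;
mysql> ALTER TABLE t1 MODIFY d INT FIRST;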
ALTER COLUMN specifies a new default value for a column
or removes the old default value.
If the old default is removed and the column can be NULL, the new
default is NULL. If the column cannot be NULL, MySQL
assigns a default value, as described in
section 6.5.3 CREATE TABLE Syntax.
DROP INDEX removes an index. This is a MySQL extension to
SQL-92. See section 6.5.8 DROP INDEX Syntax.
DROP TABLE instead.
DROP PRIMARY KEY drops the primary index. If no such
index exists, it drops the first UNIQUE index in the table.
(MySQL marks the first UNIQUE key as the PRIMARY KEY
if no PRIMARY KEY was specified explicitly.)
If you add a UNIQUE INDEX or PRIMARY KEY to a table, this
is stored before any not UNIQUE index so that MySQL can detect
duplicate keys as early as possible.
ORDER BY allows you to create the new table with the rows in a
specific order. Note that the table will not remain in this order after
inserts and deletes. In some cases, it may make sorting easier for
MySQL if the table is in order by the column that you wish to
order it by later. This option is mainly useful when you know that you
are mostly going to query the rows in a certain order; by using this
option after big changes to the table, you may be able to get higher
performance.
When you do an ALTER TABLE on a MyISAM table, all non-unique
indexes are created in a separate batch (like in REPAIR).
This should make ALTER TABLE much faster when you have many indexes.
ALTER TABLE ... DISABLE KEYS makes MySQL stop updating
non-unique indexes for a MyISAM table.
ALTER TABLE ... ENABLE KEYS should then be used to re-create the missing
indexes. As MySQL does this with a special algorithm that is much
faster than inserting keys one by one, disabling keys can give a
considerable speedup on bulk inserts.
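A sketch of the typical bulk-load pattern (table and file names hypothetical):
mysql> ALTER TABLE big_table DISABLE KEYS;
mysql> LOAD DATA INFILE 'big_file.txt' INTO TABLE big_table;
mysql> ALTER TABLE big_table ENABLE KEYS;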
Using the C API function mysql_info(), you can find out how many
records were copied, and (when IGNORE is used) how many records were
deleted due to duplication of unique key values.
FOREIGN KEY, CHECK, and REFERENCES clauses don't
actually do anything, except for InnoDB type tables which support
ADD CONSTRAINT FOREIGN KEY (...) REFERENCES ... (...).
Note that InnoDB does not allow an index_name
to be specified. See section 7.5 InnoDB Tables.
The syntax for other table types is provided only for compatibility,
to make it easier to port code from other SQL servers and to run applications
that create tables with references.
See section 1.8.4 MySQL Differences Compared To SQL-92.
Here is an example that shows some of the uses of ALTER TABLE. We
begin with a table t1 that is created as shown here:
mysql> CREATE TABLE t1 (a INTEGER,b CHAR(10));
To rename the table from t1 to t2:
mysql> ALTER TABLE t1 RENAME t2;
To change column a from INTEGER to TINYINT NOT NULL
(leaving the name the same), and to change column b from
CHAR(10) to CHAR(20) as well as renaming it from b to
c:
mysql> ALTER TABLE t2 MODIFY a TINYINT NOT NULL, CHANGE b c CHAR(20);
To add a new TIMESTAMP column named d:
mysql> ALTER TABLE t2 ADD d TIMESTAMP;
To add an index on column d, and make column a the primary key:
mysql> ALTER TABLE t2 ADD INDEX (d), ADD PRIMARY KEY (a);
To remove column c:
mysql> ALTER TABLE t2 DROP COLUMN c;
To add a new AUTO_INCREMENT integer column named c:
mysql> ALTER TABLE t2 ADD c INT UNSIGNED NOT NULL AUTO_INCREMENT,
ADD INDEX (c);
Note that we indexed c, because AUTO_INCREMENT columns must be
indexed, and also that we declare c as NOT NULL, because
indexed columns cannot be NULL.
When you add an AUTO_INCREMENT column, column values are filled in
with sequence numbers for you automatically. You can set the first
sequence number by executing SET INSERT_ID=# before
ALTER TABLE or using the AUTO_INCREMENT = # table option.
See section 5.5.6 SET Syntax.
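For example, to have the sequence for the new c column in the earlier example start at 100 instead of 1, you could have issued (a sketch):
mysql> SET INSERT_ID=100;
mysql> ALTER TABLE t2 ADD c INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ->             ADD INDEX (c);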
With MyISAM tables, if you don't change the AUTO_INCREMENT
column, the sequence number will not be affected. If you drop an
AUTO_INCREMENT column and then add another AUTO_INCREMENT
column, the numbers will start from 1 again.
See section A.6.1 Problems with ALTER TABLE.
RENAME TABLE Syntax
RENAME TABLE tbl_name TO new_tbl_name[, tbl_name2 TO new_tbl_name2,...]
The rename is done atomically, which means that no other thread can access any of the tables while the rename is running. This makes it possible to replace a table with an empty one:
CREATE TABLE new_table (...);
RENAME TABLE old_table TO backup_table, new_table TO old_table;
The rename is done from left to right, which means that if you want to swap two table names, you have to:
RENAME TABLE old_table TO backup_table,
new_table TO old_table,
backup_table TO new_table;
As long as two databases are on the same disk you can also rename from one database to another:
RENAME TABLE current_db.tbl_name TO other_db.tbl_name;
When you execute RENAME, you can't have any locked tables or
active transactions. You must also have the ALTER and DROP
privileges on the original table, and the CREATE and INSERT
privileges on the new table.
If MySQL encounters any errors in a multiple-table rename, it will do a reverse rename for all renamed tables to get everything back to the original state.
RENAME TABLE was added in MySQL 3.23.23.
DROP TABLE Syntax
DROP [TEMPORARY] TABLE [IF EXISTS] tbl_name [, tbl_name,...] [RESTRICT | CASCADE]
DROP TABLE removes one or more tables. All table data and the table
definition are removed, so be careful with this command!
In MySQL Version 3.22 or later, you can use the keywords
IF EXISTS to prevent an error from occurring for tables that don't
exist. In 4.1, you get a note for each non-existing table when using
IF EXISTS. See section 4.5.7.9 SHOW WARNINGS | ERRORS.
RESTRICT and CASCADE are allowed to make porting easier.
For the moment they don't do anything.
Note: DROP TABLE will automatically commit the current
active transaction (except if you are using 4.1 and the TEMPORARY
keyword).
Option TEMPORARY is ignored in 4.0. In 4.1 this option works as
follows:
Using TEMPORARY is a good way to ensure that you don't accidentally
drop a real table.
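For example (table names hypothetical):
mysql> DROP TABLE IF EXISTS tmp_results, old_results;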
CREATE INDEX Syntax
CREATE [UNIQUE|FULLTEXT] INDEX index_name
ON tbl_name (col_name[(length)],... )
The CREATE INDEX statement doesn't do anything in MySQL prior
to Version 3.22. In Version 3.22 or later, CREATE INDEX is mapped to an
ALTER TABLE statement to create indexes.
See section 6.5.4 ALTER TABLE Syntax.
Normally, you create all indexes on a table at the time the table itself
is created with CREATE TABLE.
See section 6.5.3 CREATE TABLE Syntax.
CREATE INDEX allows you to add indexes to existing tables.
A column list of the form (col1,col2,...) creates a multiple-column
index. Index values are formed by concatenating the values of the given
columns.
For CHAR and VARCHAR columns, indexes can be created that
use only part of a column, using col_name(length) syntax. (On
BLOB and TEXT columns the length is required.) The
statement shown here creates an index using the first 10 characters of
the name column:
mysql> CREATE INDEX part_of_name ON customer (name(10));
Because most names usually differ in the first 10 characters, this index should
not be much slower than an index created from the entire name column.
Also, using partial columns for indexes can make the index file much smaller,
which could save a lot of disk space and might also speed up INSERT
operations!
Note that you can only add an index on a column that can have NULL
values or on a BLOB/TEXT column if you are using
MySQL Version 3.23.2 or newer and are using the MyISAM
table type.
For more information about how MySQL uses indexes, see section 5.4.3 How MySQL Uses Indexes.
FULLTEXT indexes can index only VARCHAR and
TEXT columns, and only in MyISAM tables. FULLTEXT indexes
are available in MySQL Version 3.23.23 and later.
See section 6.8 MySQL Full-text Search.
DROP INDEX Syntax
DROP INDEX index_name ON tbl_name
DROP INDEX drops the index named index_name from the table
tbl_name. DROP INDEX doesn't do anything in MySQL
prior to Version 3.22. In Version 3.22 or later, DROP INDEX is mapped to an
ALTER TABLE statement to drop the index.
See section 6.5.4 ALTER TABLE Syntax.
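For example, to drop the part_of_name index created in the previous section:
mysql> DROP INDEX part_of_name ON customer;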
USE Syntax
USE db_name
The USE db_name statement tells MySQL to use the db_name
database as the default database for subsequent queries. The database remains
current until the end of the session or until another USE statement
is issued:
mysql> USE db1;
mysql> SELECT COUNT(*) FROM mytable;   # selects from db1.mytable
mysql> USE db2;
mysql> SELECT COUNT(*) FROM mytable;   # selects from db2.mytable
Making a particular database current by means of the USE statement
does not preclude you from accessing tables in other databases. The following example
accesses the author table from the db1 database and the
editor table from the db2 database:
mysql> USE db1;
mysql> SELECT author_name,editor_name FROM author,db2.editor
-> WHERE author.editor_id = db2.editor.editor_id;
The USE statement is provided for Sybase compatibility.
DESCRIBE Syntax (Get Information About Columns)
{DESCRIBE | DESC} tbl_name [col_name | wild]
DESCRIBE is a shortcut for SHOW COLUMNS FROM.
See section 4.5.7.1 Retrieving information about Database, Tables, Columns, and Indexes.
DESCRIBE provides information about a table's columns. col_name
may be a column name or a string containing the SQL `%' and `_'
wildcard characters. There is no need to enclose the string in quotes.
If the column types are different from what you expect them to be based on a
CREATE TABLE statement, note that MySQL sometimes
changes column types. See section 6.5.3.1 Silent Column Specification Changes.
This statement is provided for Oracle compatibility.
The SHOW statement provides similar information.
See section 4.5.7 SHOW Syntax.
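For example (reusing the persondata table from earlier examples; the column pattern is hypothetical):
mysql> DESCRIBE persondata;
mysql> DESCRIBE persondata 'col%';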
BEGIN/COMMIT/ROLLBACK Syntax
By default, MySQL runs in autocommit mode. This means that
as soon as you execute an update, MySQL will store the update on
disk.
If you are using transaction-safe tables (like InnoDB or
BDB), you can put MySQL into
non-autocommit mode with the following command:
SET AUTOCOMMIT=0
After this you must use COMMIT to store your changes to disk or
ROLLBACK if you want to ignore the changes you have made since
the beginning of your transaction.
If you want to switch from AUTOCOMMIT mode for one series of
statements, you can use the START TRANSACTION or
BEGIN or BEGIN WORK statement:
START TRANSACTION;
SELECT @A:=SUM(salary) FROM table1 WHERE type=1;
UPDATE table2 SET summary=@A WHERE type=1;
COMMIT;
START TRANSACTION was added in MySQL 4.0.11; it is the recommended
way to start an ad-hoc transaction, as it is SQL-99 syntax.
Note that if you are using non-transaction-safe tables, the changes will be
stored at once, independent of the status of the autocommit mode.
If you do a ROLLBACK when you have updated a non-transactional
table you will get an error (ER_WARNING_NOT_COMPLETE_ROLLBACK) as
a warning. All transaction-safe tables will be restored but any
non-transaction-safe table will not change.
If you are using START TRANSACTION or SET AUTOCOMMIT=0, you
should use the MySQL binary log for backups instead of the
older update log. Transactions are stored in the binary log
in one chunk, upon COMMIT, to ensure that transactions which are
rolled back are not stored. See section 4.9.4 The Binary Log.
The following commands automatically end a transaction (as if you had done
a COMMIT before executing the command):
| Command       | Command    | Command      |
| ALTER TABLE   | BEGIN      | CREATE INDEX |
| DROP DATABASE | DROP TABLE | RENAME TABLE |
| TRUNCATE      |            |              |
You can change the isolation level for transactions with
SET TRANSACTION ISOLATION LEVEL .... See section 6.7.3 SET TRANSACTION Syntax.
LOCK TABLES/UNLOCK TABLES Syntax
LOCK TABLES tbl_name [AS alias] {READ [LOCAL] | [LOW_PRIORITY] WRITE}
[, tbl_name [AS alias] {READ [LOCAL] | [LOW_PRIORITY] WRITE} ...]
...
UNLOCK TABLES
LOCK TABLES locks tables for the current thread. UNLOCK
TABLES releases any locks held by the current thread. All tables that
are locked by the current thread are automatically unlocked when the
thread issues another LOCK TABLES, or when the connection to the
server is closed.
To use LOCK TABLES in MySQL 4.0.2 you need the global
LOCK TABLES privilege and a SELECT privilege on the
involved tables. In MySQL 3.23 you need to have SELECT,
INSERT, DELETE, and UPDATE privileges for the
tables.
The main reasons to use LOCK TABLES are for emulating transactions
or getting more speed when updating tables. This is explained in more
detail later.
If a thread obtains a READ lock on a table, that thread (and all other
threads) can only read from the table. If a thread obtains a WRITE
lock on a table, then only the thread holding the lock can read from
or write to the table. Other threads are blocked.
The difference between READ LOCAL and READ is that
READ LOCAL allows non-conflicting INSERT statements to
execute while the lock is held. This can't, however, be used if you are
going to manipulate the database files outside MySQL while you
hold the lock.
When you use LOCK TABLES, you must lock all tables that you are
going to use and you must use the same alias that you are going to use
in your queries! If you are using a table multiple times in a query
(with aliases), you must get a lock for each alias!
WRITE locks normally have higher priority than READ locks, to
ensure that updates are processed as soon as possible. This means that if one
thread obtains a READ lock and then another thread requests a
WRITE lock, subsequent READ lock requests will wait until the
WRITE thread has gotten the lock and released it. You can use
LOW_PRIORITY WRITE locks to allow other threads to obtain READ
locks while the thread is waiting for the WRITE lock. You should only
use LOW_PRIORITY WRITE locks if you are sure that there will
eventually be a time when no threads will have a READ lock.
LOCK TABLES works as follows:
This policy ensures that table locking is deadlock free. There are, however, other things one needs to be aware of with this schema:
If you are using a LOW_PRIORITY WRITE lock for a table, this
means only that MySQL will wait for this particular lock until
there are no threads that want a READ lock. When the thread has
got the WRITE lock and is waiting to get the lock for the next
table in the lock table list, all other threads will wait for the
WRITE lock to be released. If this becomes a serious problem
with your application, you should consider converting some of your
tables to transactions safe tables.
You can safely kill a thread that is waiting for a table lock with
KILL. See section 4.5.6 KILL Syntax.
Note that you should not lock any tables that you are using with
INSERT DELAYED. This is because, in this case, the INSERT
is done by a separate thread.
Normally, you don't have to lock tables, as all single UPDATE statements
are atomic; no other thread can interfere with any other currently executing
SQL statement. There are a few cases when you would like to lock tables
anyway:
If you are going to run many operations on a bunch of tables, it is much
faster to lock the tables you are going to use. The downside is that no
other thread can update a READ-locked table and no other
thread can read a WRITE-locked table.
The reason some things are faster under LOCK TABLES is that
MySQL will not flush the key cache for the locked tables until
UNLOCK TABLES is called (normally the key cache is flushed after
each SQL statement). This speeds up inserts, updates, and deletes on
MyISAM tables.
You can also use LOCK TABLES if you want to ensure that
no other thread comes between a SELECT and an UPDATE. The
example shown here requires LOCK TABLES in order to execute safely:
mysql> LOCK TABLES trans READ, customer WRITE;
mysql> SELECT SUM(value) FROM trans WHERE customer_id=some_id;
mysql> UPDATE customer SET total_value=sum_from_previous_statement
-> WHERE customer_id=some_id;
mysql> UNLOCK TABLES;
Without LOCK TABLES, there is a chance that another thread might
insert a new row in the trans table between execution of the
SELECT and UPDATE statements.
By using incremental updates (UPDATE customer SET
value=value+new_value) or the LAST_INSERT_ID() function, you can
avoid using LOCK TABLES in many cases.
You can also solve some cases by using the user-level lock functions
GET_LOCK() and RELEASE_LOCK(). These locks are saved in a hash
table in the server and implemented with pthread_mutex_lock() and
pthread_mutex_unlock() for high speed.
See section 6.3.6.2 Miscellaneous Functions.
See section 5.3.1 How MySQL Locks Tables, for more information on locking policy.
You can lock all tables in all databases with read locks with the
FLUSH TABLES WITH READ LOCK command. See section 4.5.3 FLUSH Syntax. This is a very
convenient way to get backups if you have a filesystem, like Veritas,
that can take snapshots in time.
NOTE: LOCK TABLES is not transaction-safe and will
automatically commit any active transactions before attempting to lock the
tables.
SET TRANSACTION Syntax
SET [GLOBAL | SESSION] TRANSACTION ISOLATION LEVEL
{ READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SERIALIZABLE }
Sets the transaction isolation level globally, for the whole session, or for the next transaction.
The default behaviour is to set the isolation level for the next (not
started) transaction. If you use the GLOBAL keyword, the statement
sets the default transaction level globally for all new connections
created from that point on. You will need the SUPER
privilege to do this. Using the SESSION keyword sets the
default transaction level for all future transactions performed on the
current connection.
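For example (a sketch):
mysql> SET GLOBAL TRANSACTION ISOLATION LEVEL SERIALIZABLE;
mysql> SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
mysql> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;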
You can set the default global isolation level for mysqld with
--transaction-isolation=.... See section 4.1.1 mysqld Command-line Options.
As of Version 3.23.23, MySQL has support for full-text indexing
and searching. Full-text indexes in MySQL are an index of type
FULLTEXT. FULLTEXT indexes are used with MyISAM tables
and can be created from CHAR, VARCHAR,
or TEXT columns at CREATE TABLE time or added later with
ALTER TABLE or CREATE INDEX. For large datasets, it will be
much faster to load your data into a table that has no FULLTEXT
index, then create the index with ALTER TABLE (or CREATE
INDEX). Loading data into a table that already has a FULLTEXT
index will be slower.
Full-text searching is performed with the MATCH() function.
mysql> CREATE TABLE articles (
-> id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
-> title VARCHAR(200),
-> body TEXT,
-> FULLTEXT (title,body)
-> );
Query OK, 0 rows affected (0.00 sec)
mysql> INSERT INTO articles VALUES
-> (NULL,'MySQL Tutorial', 'DBMS stands for DataBase ...'),
-> (NULL,'How To Use MySQL Efficiently', 'After you went through a ...'),
-> (NULL,'Optimising MySQL','In this tutorial we will show ...'),
-> (NULL,'1001 MySQL Tricks','1. Never run mysqld as root. 2. ...'),
-> (NULL,'MySQL vs. YourSQL', 'In the following database comparison ...'),
-> (NULL,'MySQL Security', 'When configured properly, MySQL ...');
Query OK, 6 rows affected (0.00 sec)
Records: 6 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM articles
-> WHERE MATCH (title,body) AGAINST ('database');
+----+-------------------+------------------------------------------+
| id | title | body |
+----+-------------------+------------------------------------------+
| 5 | MySQL vs. YourSQL | In the following database comparison ... |
| 1 | MySQL Tutorial | DBMS stands for DataBase ... |
+----+-------------------+------------------------------------------+
2 rows in set (0.00 sec)
The MATCH() function performs a natural language search for a string
against a text collection (a set of one or more columns included in
a FULLTEXT index). The search string is given as the argument to
AGAINST(). The search is performed in case-insensitive fashion.
For every row in the table, MATCH() returns a relevance value,
that is, a similarity measure between the search string and the text in
that row in the columns named in the MATCH() list.
When MATCH() is used in a WHERE clause (see example above)
the rows returned are automatically sorted with highest relevance first.
Relevance values are non-negative floating-point numbers. Zero relevance
means no similarity. Relevance is computed based on the number of words
in the row, the number of unique words in that row, the total number of
words in the collection, and the number of documents (rows) that contain
a particular word.
It is also possible to perform a boolean mode search. This is explained later in the section.
The preceding example is a basic illustration showing how to use the
MATCH() function. Rows are returned in order of decreasing
relevance.
The next example shows how to retrieve the relevance values explicitly.
As neither WHERE nor ORDER BY clauses are present, returned
rows are not ordered.
mysql> SELECT id,MATCH (title,body) AGAINST ('Tutorial') FROM articles;
+----+-----------------------------------------+
| id | MATCH (title,body) AGAINST ('Tutorial') |
+----+-----------------------------------------+
| 1 | 0.64840710366884 |
| 2 | 0 |
| 3 | 0.66266459031789 |
| 4 | 0 |
| 5 | 0 |
| 6 | 0 |
+----+-----------------------------------------+
6 rows in set (0.00 sec)
The following example is more complex. The query returns the relevance
and still sorts the rows in order of decreasing relevance. To achieve
this result, you should specify MATCH() twice. This will cause no
additional overhead, because the MySQL optimiser will notice that the
two MATCH() calls are identical and invoke the full-text search
code only once.
mysql> SELECT id, body, MATCH (title,body) AGAINST
-> ('Security implications of running MySQL as root') AS score
-> FROM articles WHERE MATCH (title,body) AGAINST
-> ('Security implications of running MySQL as root');
+----+-------------------------------------+-----------------+
| id | body | score |
+----+-------------------------------------+-----------------+
| 4 | 1. Never run mysqld as root. 2. ... | 1.5055546709332 |
| 6 | When configured properly, MySQL ... | 1.31140957288 |
+----+-------------------------------------+-----------------+
2 rows in set (0.00 sec)
MySQL uses a very simple parser to split text into words. A ``word'' is any sequence of characters consisting of letters, digits, `'', and `_'. Any ``word'' that is present in the stopword list or is just too short (3 characters or less) is ignored.
Every correct word in the collection and in the query is weighted according to its significance in the query or collection. This way, a word that is present in many documents will have lower weight (and may even have a zero weight), because it has lower semantic value in this particular collection. Otherwise, if the word is rare, it will receive a higher weight. The weights of the words are then combined to compute the relevance of the row.
Such a technique works best with large collections (in fact, it was carefully tuned this way). For very small tables, word distribution does not reflect adequately their semantic value, and this model may sometimes produce bizarre results.
mysql> SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('MySQL');
Empty set (0.00 sec)
The search for the word MySQL produces no results in the above
example, because that word is present in more than half the rows. As such,
it is effectively treated as a stopword (that is, a word with zero semantic
value). This is the most desirable behaviour -- a natural language query
should not return every second row from a 1 GB table.
A word that matches half of rows in a table is less likely to locate relevant documents. In fact, it will most likely find plenty of irrelevant documents. We all know this happens far too often when we are trying to find something on the Internet with a search engine. It is with this reasoning that such rows have been assigned a low semantic value in this particular dataset.
As of Version 4.0.1, MySQL can also perform boolean full-text searches using
the IN BOOLEAN MODE modifier.
mysql> SELECT * FROM articles WHERE MATCH (title,body)
-> AGAINST ('+MySQL -YourSQL' IN BOOLEAN MODE);
+----+------------------------------+-------------------------------------+
| id | title | body |
+----+------------------------------+-------------------------------------+
| 1 | MySQL Tutorial | DBMS stands for DataBase ... |
| 2 | How To Use MySQL Efficiently | After you went through a ... |
| 3 | Optimising MySQL | In this tutorial we will show ... |
| 4 | 1001 MySQL Tricks | 1. Never run mysqld as root. 2. ... |
| 6 | MySQL Security | When configured properly, MySQL ... |
+----+------------------------------+-------------------------------------+
This query retrieved all the rows that contain the word MySQL
(note: the 50% threshold is not used), but that do not contain
the word YourSQL. Note that a boolean mode search does not
automatically sort rows in order of decreasing relevance. You can
see this from result of the preceding query, where the row with the
highest relevance (the one that contains MySQL twice) is listed
last, not first. A boolean full-text search can also work even without
a FULLTEXT index, although it would be slow.
The boolean full-text search capability supports the following operators:
+
A leading plus sign indicates that this word must be present in every
row returned.
-
A leading minus sign indicates that this word must not be present in any
row returned.
By default (when neither plus nor minus is specified), the word is
optional, but the rows that contain it will be rated higher. This mimics
the behaviour of MATCH() ... AGAINST() without the IN BOOLEAN
MODE modifier.
< >
These two operators are used to change a word's contribution to the
relevance value that is assigned to a row. The < operator
decreases the contribution and the > operator increases it.
See the example below.
( )
Parentheses are used to group words into subexpressions.
~
A leading tilde acts as a negation operator, causing the word's
contribution to the row relevance to be negative. It is useful for
marking noise words. A row that contains such a word will be rated lower
than others, but will not be excluded altogether, as it would be with
the - operator.
*
An asterisk is the truncation operator. Unlike the other operators, it
should be appended to the word, not prepended.
"
A phrase that is enclosed within double quotes ", matches only
rows that contain this phrase literally, as it was typed.
And here are some examples:
apple banana
find rows that contain at least one of these words.
+apple +juice
... both words.
+apple macintosh
... the word ``apple'', but rank it higher if it also contains ``macintosh''.
+apple -macintosh
... the word ``apple'' but not ``macintosh''.
+apple +(>pie <strudel)
... ``apple'' and ``pie'', or ``apple'' and ``strudel'' (in any order),
but rank ``apple pie'' higher than ``apple strudel''.
apple*
... ``apple'', ``apples'', ``applesauce'', and ``applet''.
"some words"
... ``some words of wisdom'', but not ``some noise words''.
All columns used in the MATCH() function must be columns from the
same table that is part of the same FULLTEXT index, unless the
MATCH() is IN BOOLEAN MODE.
The MATCH() column list must exactly match the column list in some
FULLTEXT index definition for the table, unless this MATCH()
is IN BOOLEAN MODE.
The argument to AGAINST() must be a constant string.
Unfortunately, full-text search has few user-tunable parameters yet, although adding some is very high on the TODO. If you have a MySQL source distribution (see section 2.3 Installing a MySQL Source Distribution), you can exert more control over full-text searching behaviour.
Note that full-text search was carefully tuned for the best searching effectiveness. Modifying the default behaviour will, in most cases, only make the search results worse. Do not alter the MySQL sources unless you know what you are doing!
The minimum length of words to be indexed is defined by the ft_min_word_len server variable.
See section 4.5.7.4 SHOW VARIABLES.
Change it to the value you prefer, and rebuild your FULLTEXT indexes.
(This variable is only available from MySQL version 4.0.)
The stopword list is read from the file specified by the ft_stopword_file variable.
See section 4.5.7.4 SHOW VARIABLES.
Rebuild your FULLTEXT indexes after modifying the stopword list.
(This variable is only available from MySQL version 4.0.10 and onwards.)
The 50% threshold is determined by the particular weighting scheme chosen. To disable it, change the line
#define GWS_IN_USE GWS_PROB
to:
#define GWS_IN_USE GWS_FREQ
Then recompile MySQL. There is no need to rebuild the indexes in this case. Note: by doing this you severely decrease MySQL's ability to provide adequate relevance values for the MATCH() function.
If you really need to search for such common words, it would be better to
search using IN BOOLEAN MODE instead, which does not observe the 50%
threshold.
The operators used by boolean full-text search are defined by the ft_boolean_syntax variable.
See section 4.5.7.4 SHOW VARIABLES.
Still, this variable is read-only; its value is set in
`myisam/ft_static.c'.
For those changes that require you to rebuild your FULLTEXT indexes,
the easiest way to do so for a MyISAM table is to use the following
statement, which rebuilds the index file:
mysql> REPAIR TABLE tbl_name QUICK;
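For example, to index shorter words you might put a line such as the following into the [mysqld] section of your option file (the value 3 is only an illustration, not a recommendation):
set-variable = ft_min_word_len=3
and then rebuild each FULLTEXT index with the REPAIR TABLE ... QUICK statement shown above.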
Some features planned for full-text search (TODO):
Make all operations with a FULLTEXT index faster.
Support for full-text search in MERGE tables.
Make the model more flexible (by adding some adjustable parameters to FULLTEXT in CREATE/ALTER TABLE).
From version 4.0.1, MySQL server features a Query Cache.
When in use, the query cache stores the text of a SELECT query
together with the corresponding result that was sent to the client.
If an identical query is later received, the server will retrieve
the results from the query cache rather than parsing and executing the
same query again.
NOTE: The query cache does not return stale data. When data is modified, any relevant entries in the query cache are flushed.
The query cache is extremely useful in an environment where (some) tables don't change very often and you have a lot of identical queries. This is a typical situation for many web servers that use a lot of dynamic content.
Below is some performance data for the query cache. (These results were generated by running the MySQL benchmark suite on a Linux Alpha 2 x 500 MHz with 2 GB RAM and a 64 MB query cache):
If you want to disable the query cache code, set query_cache_size=0.
With the query cache disabled there is no noticeable overhead.
(The query cache can also be excluded from the code entirely with the configure option
--without-query-cache.)
Queries are compared before parsing, thus
SELECT * FROM tbl_name
and
Select * from tbl_name
are regarded as different queries by the query cache, so queries need to be exactly the same (byte for byte) to be seen as identical. In addition, a query may be seen as different if, for instance, one client uses a different communication protocol format or a different character set than another client.
Queries that use different databases, different protocol versions, or different default character sets are considered different queries and are cached separately.
The cache does work for SELECT SQL_CALC_FOUND_ROWS ... and
SELECT FOUND_ROWS() ... type queries because the number of
found rows is also stored in the cache.
If a query result is returned from the query cache, the status variable
Com_select is not incremented, but Qcache_hits is.
See section 6.9.4 Query Cache Status and Maintenance.
If a table changes (INSERT, UPDATE, DELETE,
TRUNCATE, ALTER or DROP TABLE|DATABASE),
then all cached queries that used this table (possibly through a
MRG_MyISAM table!) become invalid and are removed from the cache.
Transactional InnoDB tables that have been changed will be invalidated
when a COMMIT is performed.
In MySQL 4.0, the query cache is disabled inside of transactions (it does
not return results). Beginning with MySQL 4.1.1, the query cache will also
work inside of transactions when using InnoDB tables (it will use the
table version number to detect if the data is still current or not).
A query cannot be cached if it contains one of the functions:
| Function                            | Function       | Function     |
| User-Defined Functions              | CONNECTION_ID  | FOUND_ROWS   |
| GET_LOCK                            | RELEASE_LOCK   | LOAD_FILE    |
| MASTER_POS_WAIT                     | NOW            | SYSDATE      |
| CURRENT_TIMESTAMP                   | CURDATE        | CURRENT_DATE |
| CURTIME                             | CURRENT_TIME   | DATABASE     |
| ENCRYPT (with one parameter)        | LAST_INSERT_ID | RAND         |
| UNIX_TIMESTAMP (without parameters) | USER           | BENCHMARK    |
Nor can a query be cached if it contains user variables,
references the mysql system database,
is of the form SELECT ... IN SHARE MODE,
SELECT ... INTO OUTFILE ...,
SELECT ... INTO DUMPFILE ... or
of the form SELECT * FROM AUTOINCREMENT_FIELD IS NULL
(to retrieve last insert id - ODBC work around).
However, FOUND_ROWS() will return the correct value,
even if the preceding query was fetched from the cache.
In case a query does not use any tables, or uses temporary tables, or if the user has a column privilege for any of the involved tables, that query will not be cached.
Before a query is fetched from the query cache, MySQL will check that the user has SELECT privilege to all the involved databases and tables. If this is not the case, the cached result will not be used.
The query cache adds a few MySQL system variables for
mysqld, which may be set in a configuration file or on the
command line when starting mysqld.
query_cache_limit
Don't cache results that are bigger than this. (Default 1M).
query_cache_min_res_unit
This variable is present from version 4.1.
The result of a query (the data that is also sent to the client) is stored
in the query cache during result retrieval. Therefore the data is usually
not handled in one big chunk. The query cache allocates blocks for storing
this data on demand, so when one block is filled, a new block is allocated.
Because memory allocation operation is costly (time wise), the query cache
allocates blocks with a minimum size of query_cache_min_res_unit.
When a query is executed, the last result block is trimmed to the actual
data size, so that unused memory is freed.
The default value of query_cache_min_res_unit is 4 KB, which should
be adequate for most cases.
If you have a lot of queries with small results, the default block size may
lead to memory fragmentation (indicated by a large number of free blocks,
Qcache_free_blocks), which can cause the query cache to have to
delete queries from the cache due to lack of memory
(Qcache_lowmem_prunes). In this case you should decrease
query_cache_min_res_unit.
If most of your queries have big results (check Qcache_total_blocks
and Qcache_queries_in_cache), you can increase performance by
increasing query_cache_min_res_unit. However, be careful not to
make it too large (see the previous point).
query_cache_size
The amount of memory (specified in bytes) allocated to store results from
old queries. If this is 0, the query cache is disabled (default).
query_cache_type
This may be set (only numeric) to
| Option | Description |
| 0 | (OFF, don't cache or retrieve results) |
| 1 | (ON, cache all results except SELECT SQL_NO_CACHE ... queries)
|
| 2 | (DEMAND, cache only SELECT SQL_CACHE ... queries)
|
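For example, a minimal option file entry that enables the cache might look like this (the sizes are arbitrary illustrations, not recommendations):
[mysqld]
set-variable = query_cache_size=16M
set-variable = query_cache_limit=1M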
Inside a thread (connection), the behaviour of the query cache can be changed from the default. The syntax is as follows:
QUERY_CACHE_TYPE = OFF | ON | DEMAND
QUERY_CACHE_TYPE = 0 | 1 | 2
| Option | Description |
| 0 or OFF | Don't cache or retrieve results. |
| 1 or ON | Cache all results except SELECT SQL_NO_CACHE ... queries.
|
| 2 or DEMAND | Cache only SELECT SQL_CACHE ... queries.
|
There are two possible query cache related parameters that may be
specified in a SELECT query:
| Option | Description |
SQL_CACHE
| If QUERY_CACHE_TYPE is DEMAND, allow the query to be cached.
If QUERY_CACHE_TYPE is ON, this is the default.
If QUERY_CACHE_TYPE is OFF, do nothing.
|
SQL_NO_CACHE
| Make this query non-cachable, don't allow this query to be stored in the cache. |
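For example (a sketch using the articles table from the full-text examples; any table works the same way):
mysql> SELECT SQL_CACHE id, title FROM articles;
mysql> SELECT SQL_NO_CACHE COUNT(*) FROM articles;
With QUERY_CACHE_TYPE set to DEMAND only the first query is eligible for caching; the second query is never cached.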
With the FLUSH QUERY CACHE command you can defragment the query
cache to better utilise its memory. This command will not remove any
queries from the cache.
FLUSH TABLES also flushes the query cache.
The RESET QUERY CACHE command removes all query results from the
query cache.
You can check whether the query cache is present in your MySQL version:
mysql> SHOW VARIABLES LIKE 'have_query_cache';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| have_query_cache | YES   |
+------------------+-------+
1 row in set (0.00 sec)
You can monitor query cache performance in SHOW STATUS:
| Variable | Description |
Qcache_queries_in_cache
| Number of queries registered in the cache. |
Qcache_inserts
| Number of queries added to the cache. |
Qcache_hits
| Number of cache hits. |
Qcache_lowmem_prunes
| Number of queries that were deleted from cache because of low memory. |
Qcache_not_cached
| Number of non-cached queries
(not cachable, or due to QUERY_CACHE_TYPE).
|
Qcache_free_memory
| Amount of free memory for query cache. |
Qcache_free_blocks
| Number of free memory blocks in query cache. |
Qcache_total_blocks
| Total number of blocks in query cache. |
Total number of queries =
Qcache_inserts + Qcache_hits + Qcache_not_cached.
The query cache uses variable length blocks, so Qcache_total_blocks
and Qcache_free_blocks may indicate query cache memory fragmentation.
After FLUSH QUERY CACHE only a single (big) free block remains.
Note: Every query needs a minimum of two blocks (one for the query text and one or more for the query results). Also, every table that is used by a query needs one block, but if two or more queries use the same table, only one block needs to be allocated.
You can use the Qcache_lowmem_prunes status variable to tune the query
cache size. It counts the number of queries that have been removed from the
cache to free up memory for caching new queries. The query cache uses a
least recently used (LRU) strategy to decide which queries to
remove from the cache.
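For example, assuming your server version supports a LIKE pattern on SHOW STATUS (as it does for SHOW VARIABLES above), all of the Qcache counters can be inspected with a single statement:
mysql> SHOW STATUS LIKE 'Qcache%';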
As of MySQL Version 3.23.6, you can choose between three basic
table formats (ISAM, HEAP, and MyISAM). Newer
versions of MySQL support additional table types (InnoDB
and BDB), depending on how you compile it.
When you create a new table, you can tell MySQL what type of table to create.
The default table type is usually MyISAM.
MySQL will always create a `.frm' file to hold the table and column definitions. The table's index and data will be stored in one or more other files, depending on the table type.
If you try to use a table type that is not compiled-in or activated,
MySQL will instead create a table of type MyISAM. This behaviour
is convenient when you want to copy tables between MySQL servers that
support different table types. (Perhaps your master server supports
transactional storage engines for increased safety, while the slave servers use
only non-transactional storage engines for greater speed.)
This automatic change of table types can be confusing for new MySQL users. We plan to fix this by introducing warnings in the new client-server protocol in version 4.1 and generating a warning when a table type is automatically changed.
You can convert tables between different types with the ALTER
TABLE statement. See section 6.5.4 ALTER TABLE Syntax.
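For example, assuming InnoDB support is compiled in or activated, an existing table could be converted with:
mysql> ALTER TABLE tbl_name TYPE = INNODB;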
Note that MySQL supports two different kinds of
tables: transaction-safe tables (InnoDB and BDB)
and not transaction-safe tables (HEAP, ISAM,
MERGE, and MyISAM).
Advantages of transaction-safe tables (TST):
You can combine many statements and accept them all in one go with the COMMIT command.
You can execute ROLLBACK to ignore your changes (if you are not
running in auto-commit mode).
Note that to use InnoDB tables you have to use at least the
innodb_data_file_path startup option. See section 7.5.3 InnoDB Startup Options.
Advantages of not transaction-safe tables (NTST):
You can combine TST and NTST tables in the same statements to get the best of both worlds.
MyISAM Tables
MyISAM is the default table type in MySQL Version 3.23. It's
based on the ISAM code and has a lot of useful extensions.
The index is stored in a file with the `.MYI' (MYIndex) extension,
and the data is stored in a file with the `.MYD' (MYData) extension.
You can check/repair MyISAM tables with the myisamchk
utility. See section 4.4.6.7 Using myisamchk for Crash Recovery. You can compress MyISAM tables with
myisampack to take up much less space.
See section 4.7.4 myisampack, The MySQL Compressed Read-only Table Generator.
The following is new in MyISAM:
There is a flag in the MyISAM file that indicates whether
the table was closed correctly. If mysqld is started with
--myisam-recover, MyISAM tables will automatically be
checked and/or repaired on open if the table wasn't closed properly.
You can INSERT new rows into a table that doesn't have free blocks
in the middle of the datafile at the same time as other threads are
reading from the table (concurrent insert). A free block can come from
an update of a dynamic-length row with much data to a row with less data,
or from deleting rows. When all free blocks are used up, all future
inserts will be concurrent again.
Internal handling of one AUTO_INCREMENT column per table. MyISAM
will automatically update this on INSERT/UPDATE. The
AUTO_INCREMENT value can be reset with myisamchk. This
makes AUTO_INCREMENT columns faster (at least 10%), and old
numbers will not be reused as with the old ISAM. Note that when an
AUTO_INCREMENT column is defined at the end of a multi-part key the old
behaviour is still present.
When rows are inserted in sorted order (as when you are using an AUTO_INCREMENT
column), the key tree will be split so that the high node only contains one
key. This improves space utilisation in the key tree.
BLOB and TEXT columns can be indexed.
NULL values are allowed in indexed columns. This takes 0-1
bytes/key.
myisamchk.
myisamchk will mark tables as checked if one runs it with
--update-state. myisamchk --fast will only check those
tables that don't have this mark.
myisamchk -a stores statistics for key parts (and not only for
whole keys as in ISAM).
myisampack can pack BLOB and VARCHAR columns.
You can put the datafile and index file in different directories to get more speed (with the DATA/INDEX DIRECTORY="path" option to
CREATE TABLE). See section 6.5.3 CREATE TABLE Syntax.
MyISAM also supports the following things, which MySQL
will be able to use in the near future:
A new VARCHAR type; a VARCHAR column starts
with a length stored in 2 bytes.
VARCHAR may have fixed or dynamic record length.
VARCHAR and CHAR may be up to 64K.
All key segments have their own language definition. This will enable
MySQL to have different language definitions per column.
A hashed computed index can be used for UNIQUE. This will allow
you to have UNIQUE on any combination of columns in a table. (You
can't search on a UNIQUE computed index, however.)
Note that index files are usually much smaller with MyISAM than with
ISAM. This means that MyISAM will normally use less
system resources than ISAM, but will need more CPU time when inserting
data into a compressed index.
The following options to mysqld can be used to change the behaviour of
MyISAM tables. See section 4.5.7.4 SHOW VARIABLES.
| Option | Description |
--myisam-recover=# | Automatic recovery of crashed tables. |
-O myisam_sort_buffer_size=# | Buffer used when recovering tables. |
--delay-key-write=ALL | Don't flush key buffers between writes for any MyISAM table |
-O myisam_max_extra_sort_file_size=# | Used to help MySQL to decide when to use the slow but safe key cache index create method. Note that this parameter is given in megabytes before 4.0.3 and in bytes beginning with this version. |
-O myisam_max_sort_file_size=# | Don't use the fast sort index method to create an index if the temporary file would become bigger than this. Note that this parameter is given in megabytes before 4.0.3 and in bytes beginning with this version. |
-O bulk_insert_buffer_size=# | Size of tree cache used in bulk insert optimisation. Note that this is a limit per thread! |
The automatic recovery is activated if you start mysqld with
--myisam-recover=#. See section 4.1.1 mysqld Command-line Options.
On open, the table is checked to see whether it's marked as crashed, or whether the open
count variable for the table is not 0 and you are running with
--skip-external-locking. If either of these is true, the following
happens.
If the recovery wouldn't be able to recover all rows from a previously
completed statement and you didn't specify FORCE as an option to
myisam-recover, the automatic repair aborts with an error
message in the error file:
Error: Couldn't repair table: test.g00pages
If you had used the FORCE option in this case, you would instead get
a warning in the error file:
Warning: Found 344 of 354 rows when repairing ./test/g00pages
Note that if you run automatic recovery with the BACKUP option,
you should have a cron script that automatically moves files with names
like `tablename-datetime.BAK' from the database directories to
backup media.
See section 4.1.1 mysqld Command-line Options.
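For example, a sketch of a server start that enables automatic recovery and keeps backups of repaired tables (this particular combination of option values is only an illustration):
shell> mysqld --myisam-recover=BACKUP,FORCE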
MySQL can support different index types, but the normal type is
ISAM or MyISAM. These use a B-tree index, and you can roughly calculate
the size for the index file as (key_length+4)/0.67, summed over
all keys. (This is for the worst case when all keys are inserted in
sorted order and we don't have any compressed keys.)
String indexes are space compressed. If the first index part is a
string, it will also be prefix compressed. Space compression makes the
index file smaller than the above figures if the string column has a lot
of trailing space or is a VARCHAR column that is not always used
to the full length. Prefix compression is used on keys that start
with a string. Prefix compression helps if there are many strings
with an identical prefix.
In MyISAM tables, you can also prefix compress numbers by specifying
PACK_KEYS=1 when you create the table. This helps when you have
many integer keys that have an identical prefix when the numbers are stored
high-byte first.
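For example (a sketch; the table and column names are only illustrative):
mysql> CREATE TABLE lookup (id INT NOT NULL, PRIMARY KEY (id)) PACK_KEYS=1;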
MyISAM Table Formats
MyISAM supports three different table formats. Two of them are chosen
automatically depending on the type of columns you are using. The third,
compressed tables, can only be created with the myisampack tool.
When you CREATE or ALTER a table, you can force the table
format to DYNAMIC or FIXED with the ROW_FORMAT=#
table option, for tables that don't contain BLOB columns. In the future
you will be able to compress/decompress tables by specifying
ROW_FORMAT=compressed | default to ALTER TABLE.
See section 6.5.3 CREATE TABLE Syntax.
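For example, a table without BLOB or TEXT columns could be forced to one format or the other like this (a sketch; the table t and its columns are hypothetical):
mysql> CREATE TABLE t (a INT, b CHAR(10)) ROW_FORMAT=FIXED;
mysql> ALTER TABLE t ROW_FORMAT=DYNAMIC;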
Static (Fixed-length) Table Characteristics
This is the default format. It's used when the table contains no
VARCHAR, BLOB, or TEXT columns.
This format is the simplest and most secure format. It is also the fastest of the on-disk formats. The speed comes from the easy way data can be found on disk. When looking up something with an index and static format it is very simple. Just multiply the row number by the row length.
Also, when scanning a table it is very easy to read a constant number of records with each disk read.
The security is evidenced if your computer crashes when writing to a
fixed-size MyISAM file, in which case myisamchk can easily figure out where each
row starts and ends. So it can usually reclaim all records except the
partially written one. Note that in MySQL all indexes can always be
reconstructed:
CHAR, NUMERIC, and DECIMAL columns are space-padded
to the column width.
Doesn't need to be reorganised (with myisamchk) unless a huge number of
records are deleted and you want to return free disk space to the operating
system.
Dynamic Table Characteristics
This format is used if the table contains any VARCHAR, BLOB,
or TEXT columns or if the table was created with
ROW_FORMAT=dynamic.
This format is a little more complex because each row has to have a header that says how long it is. One record can also end up at more than one location when it is made longer at an update.
You can use OPTIMIZE table or myisamchk to defragment a
table. If you have static data that you access/change a lot in the same
table as some VARCHAR or BLOB columns, it might be a good
idea to move the dynamic columns to other tables just to avoid
fragmentation:
Each record is preceded by a bitmap indicating which columns are empty
('') for string columns, or zero for numeric columns. (This isn't
the same as columns containing NULL values.) If a string column
has a length of zero after removal of trailing spaces, or a numeric
column has a value of zero, it is marked in the bit map and not saved to
disk. Non-empty strings are saved as a length byte plus the string
contents.
Expect to run myisamchk
-r from time to time to get better performance. Use myisamchk -ei
tbl_name for some statistics.
The expected row length for dynamic-sized records is:
3 + (number of columns + 7) / 8
  + (number of char columns)
  + packed size of numeric columns
  + length of strings
  + (number of NULL columns + 7) / 8
There is a penalty of 6 bytes for each link. A dynamic record is linked whenever an update causes an enlargement of the record. Each new link will be at least 20 bytes, so the next enlargement will probably go in the same link. If not, there will be another link. You may check how many links there are with
myisamchk -ed. All links may be removed with myisamchk -r.
Compressed Table Characteristics
This is a read-only type that is generated with the optional
myisampack tool (pack_isam for ISAM tables):
GPL, can read tables that were compressed with myisampack.
Numbers with the value 0 are stored using 1 bit.
If values in an integer column have a small range, the column is stored
using the smallest possible type. For example, a BIGINT column (8 bytes) may
be stored as a TINYINT column (1 byte) if all values are in the range
0 to 255.
If a column has only a small set of possible values, the column type is
converted to ENUM.
A table can be uncompressed with myisamchk.
MyISAM Table Problems
The file format that MySQL uses to store data has been extensively tested, but there are always circumstances that may cause database tables to become corrupted.
Corrupted MyISAM Tables
Even if the MyISAM table format is very reliable (all changes to a table are written before the SQL statement returns), you can still get corrupted tables if some of the following things happen:
mysqld process being killed in the middle of a write.
Typical symptoms of a corrupt table are:
You get the following error while selecting data from the table:
Incorrect key file for table: '...'. Try to repair it
You can check if a table is ok with the command CHECK
TABLE. See section 4.4.4 CHECK TABLE Syntax.
You can repair a corrupted table with REPAIR TABLE. See section 4.4.5 REPAIR TABLE Syntax.
You can also repair a table with the myisamchk command when
mysqld is not running. See section 4.4.6.7 Using myisamchk for Crash Recovery.
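For example (a sketch; run myisamchk from the database directory, and only while mysqld is not using the table):
mysql> CHECK TABLE tbl_name;
mysql> REPAIR TABLE tbl_name;

shell> myisamchk --recover tbl_name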
If your tables get corrupted a lot you should try to find the reason for this! See section A.4.1 What To Do If MySQL Keeps Crashing.
In this case the most important thing to know is whether the table got
corrupted because mysqld died (this is easy to verify by
checking whether there is a recent row restarted mysqld in the mysqld
error file). If this isn't the case, then you should try to make a test
case of it. See section E.1.6 Making a Test Case If You Experience Table Corruption.
Each MyISAM `.MYI' file has in the header a counter that can
be used to check if a table has been closed properly.
If you get the following warning from CHECK TABLE or myisamchk:
# clients is using or hasn't closed the table properly
this means that this counter has come out of sync. This doesn't mean that the table is corrupted, but means that you should at least do a check on the table to verify that it's okay.
The counter works as follows:
The first time a table is updated in MySQL, a counter in the header of the index file is incremented.
The counter is not changed during further updates.
The last time a table is closed (because of a FLUSH or
because there isn't room in the table cache), the counter is
decremented if the table has been updated at any point.
In other words, the only ways this can go out of sync are:
The MyISAM tables are copied without a LOCK and
FLUSH TABLES.
You run myisamchk --recover or myisamchk
--update-state on a table that was in use by mysqld.
Many mysqld servers are using the table and one has done a
REPAIR or CHECK of the table while it was in use by
another server. In this setup the CHECK is safe to do (even if
you will get the warning from other servers), but REPAIR should
be avoided as it currently replaces the datafile with a new one, which
is not signalled to the other servers.
MERGE Tables
MERGE tables are new in MySQL Version 3.23.25. The code
is still in gamma, but should be reasonably stable.
A MERGE table (also known as a MRG_MyISAM table) is a
collection of identical MyISAM tables that can be used as one.
You can only SELECT, DELETE, and UPDATE from the
collection of tables. If you DROP the MERGE table, you
are only dropping the MERGE specification.
Note that DELETE FROM merge_table used without a WHERE
will only clear the mapping for the table, not delete everything in the
mapped tables. (We plan to fix this in 4.1).
By identical tables we mean that all tables are created with identical
column and key information. You can't merge tables in which the
columns are packed differently, that don't have exactly the same columns,
or that have the keys in a different order. However, some of the tables can be
compressed with myisampack. See section 4.7.4 myisampack, The MySQL Compressed Read-only Table Generator.
When you create a MERGE table, you will get a `.frm' table
definition file and a `.MRG' table list file. The `.MRG' just
contains a list of the index files (`.MYI' files) that should
be used as one. All used tables must be in the same database as the
MERGE table itself.
For the moment, you need to have SELECT, UPDATE, and
DELETE privileges on the tables you map to a MERGE table.
MERGE tables can help you solve the following problems:
Easily manage a set of log tables. For example, you can put data from
different months into separate files, compress some of them with
myisampack, and then create a MERGE to use these as one.
Obtain more speed. You can split a big read-only table based on some
criteria and then put the different parts on different disks. A
MERGE table on this could be much faster than using
the big table. (You can, of course, also use a RAID to get the same
kind of benefits.)
Do more efficient searches. If you know exactly what you are looking for,
you can search in just one of the split tables for some queries and use a
MERGE table for others. You can even have many
different MERGE tables active, with possible overlapping files.
More efficient repairs. It's easier to repair the individual files that are
mapped to a MERGE file than trying to repair a really big file.
Instant mapping of many files as one. A MERGE table uses the
index of the individual tables. It doesn't need to maintain an index of
its own. This makes MERGE table collections VERY fast to make or
remap. Note that you must specify the key definitions when you create
a MERGE table!
If you have a set of tables that you join to a big table on demand or batch,
you should instead create a MERGE table on them on demand.
This is much faster and will save a lot of disk space.
You can create an alias/synonym for a table by just using MERGE
over one table. There shouldn't be any really notable performance
impacts of doing this (only a couple of indirect calls and memcpy()
calls for each read).
The disadvantages with MERGE tables are:
You can only use identical MyISAM tables for a MERGE table.
REPLACE doesn't work.
MERGE tables use more file descriptors. If 10 users are using a
MERGE table that maps over 10 tables, you
are using 10*10 + 10 file descriptors. (10 datafiles for 10 users
and 10 shared index files.)
Key reads are slower. When you do a read on a key, the MERGE
storage engine will need to issue a read on all underlying tables to check
which one most closely matches the given key. If you then do a "read-next",
the MERGE storage engine will need to search the read buffers
to find the next key. Only when one key buffer is used up will the storage engine
need to read the next key block. This makes MERGE keys much slower
on eq_ref searches, but not much slower on ref searches.
See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
You shouldn't use DROP TABLE,
ALTER TABLE,
DELETE FROM table_name without a WHERE clause,
REPAIR TABLE,
TRUNCATE TABLE,
OPTIMIZE TABLE, or
ANALYZE TABLE
on any of the tables that are
mapped by a MERGE table that is "open". If you do this, the
MERGE table may still refer to the original table and you will
get unexpected results. The easiest way to get around this deficiency
is to issue the FLUSH TABLES command, ensuring no MERGE
tables remain "open".
When you create a MERGE table you have to specify with
UNION(list-of-tables) which tables you want to use as
one. Optionally you can specify with INSERT_METHOD if you want
insert for the MERGE table to happen in the first or last table
in the UNION list. If you don't specify INSERT_METHOD or
specify NO, then all INSERT commands on the MERGE
table will return an error.
The following example shows you how to use MERGE tables:
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
CREATE TABLE t2 (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20));
INSERT INTO t1 (message) VALUES ("Testing"),("table"),("t1");
INSERT INTO t2 (message) VALUES ("Testing"),("table"),("t2");
CREATE TABLE total (a INT AUTO_INCREMENT PRIMARY KEY, message CHAR(20))
TYPE=MERGE UNION=(t1,t2) INSERT_METHOD=LAST;
Note that you can also manipulate the `.MRG' file directly from the outside of the MySQL server:
shell> cd /mysql-data-directory/current-database
shell> ls -1 t1.MYI t2.MYI > total.MRG
shell> mysqladmin flush-tables
Now you can do things like:
mysql> SELECT * FROM total;
+---+---------+
| a | message |
+---+---------+
| 1 | Testing |
| 2 | table   |
| 3 | t1      |
| 1 | Testing |
| 2 | table   |
| 3 | t2      |
+---+---------+
Note that the a column, though declared as PRIMARY KEY,
is not really unique, as a MERGE table cannot enforce uniqueness
over the set of underlying MyISAM tables.
To remap a MERGE table you can do one of the following:
DROP the table and re-create it
Use ALTER TABLE table_name UNION=(...)
Change the `.MRG' file and issue a FLUSH TABLE on the
MERGE table and all underlying tables to force the storage engine to
read the new definition file.
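For example, assuming a third table t3 has been created with the same definition as t1 and t2 above, the MERGE table from the earlier example could be remapped with:
mysql> ALTER TABLE total UNION=(t1,t2,t3);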
MERGE Table Problems
The following are the known problems with MERGE tables:
A MERGE table cannot maintain UNIQUE constraints over the
whole table. When you do INSERT, the data goes into the first or
last table (according to INSERT_METHOD=xxx) and this MyISAM
table ensures that the data are unique, but it knows nothing about the
other MyISAM tables.
DELETE FROM merge_table used without a WHERE
will only clear the mapping for the table, not delete everything in the
mapped tables.
Using RENAME TABLE on a table used in an active MERGE table may
corrupt the table. This will be fixed in MySQL 4.0.x.
The creation of a MERGE table doesn't check whether the underlying
tables are of compatible types. If you use MERGE tables in this
fashion, you are very likely to run into strange problems.
If you use ALTER TABLE to first add a UNIQUE index to a
table used in a MERGE table and then use ALTER TABLE to
add a normal index on the MERGE table, the key order will be
different for the tables if there was an old non-unique key in the
table. This is because ALTER TABLE puts UNIQUE keys before
normal keys to be able to detect duplicate keys as early as possible.
The optimiser doesn't yet handle a MERGE table efficiently and may
sometimes produce non-optimal joins. This will be fixed in MySQL 4.0.x.
DROP TABLE on a table that is in use by a MERGE table will
not work on Windows because the MERGE storage engine does the table mapping
hidden from the upper layer of MySQL. Because Windows doesn't allow you
to drop files that are open, you first must flush all MERGE
tables (with FLUSH TABLES) or drop the MERGE table before
dropping the table. We will fix this at the same time we introduce
VIEWs.
ISAM Tables
The deprecated ISAM table type will disappear in MySQL version 5.0.
MyISAM is a better implementation of the same thing.
ISAM uses a B-tree index. The index is stored in a file
with the `.ISM' extension, and the data is stored in a file with
the `.ISD' extension.
You can check/repair ISAM tables with the isamchk utility.
See section 4.4.6.7 Using myisamchk for Crash Recovery.
ISAM has the following features/properties:
Most of the things true for MyISAM tables are also true for ISAM
tables. See section 7.1 MyISAM Tables. The major differences compared
to MyISAM tables are:
ISAM tables are not binary portable across OS/Platforms.
ISAM tables are compressed with pack_isam rather than with myisampack.
If you want to convert an ISAM table to a MyISAM table so
that you can use utilities such as mysqlcheck, use an ALTER
TABLE statement:
mysql> ALTER TABLE tbl_name TYPE = MYISAM;
The embedded MySQL server versions don't support ISAM tables.
HEAP Tables
HEAP tables use hashed indexes and are stored in memory. This
makes them very fast, but if MySQL crashes you will lose all
data stored in them. HEAP is very useful for temporary tables!
The MySQL internal HEAP tables use 100% dynamic hashing
without overflow areas. There is no extra space needed for free lists.
HEAP tables also don't have problems with delete + inserts, which
normally is common with hashed tables:
mysql> CREATE TABLE test TYPE=HEAP SELECT ip,SUM(downloads) AS down
-> FROM log_table GROUP BY ip;
mysql> SELECT COUNT(ip),AVG(down) FROM test;
mysql> DROP TABLE test;
Here are some things you should consider when you use HEAP tables:
You should always specify MAX_ROWS in the CREATE statement
to ensure that you don't accidentally use all memory.
Indexes will only be used with = and <=> (but are VERY fast).
HEAP tables can only use whole keys to search for a row; compare this
to MyISAM tables where any prefix of the key can be used to find rows.
HEAP tables use a fixed record length format.
HEAP doesn't support BLOB/TEXT columns.
HEAP doesn't support AUTO_INCREMENT columns.
HEAP doesn't support an index on a NULL
column.
You can have non-unique keys in a HEAP table (this isn't common for
hashed tables).
HEAP tables are shared between all clients (just like any other
table).
You can't search for the next entry in order (that is, you can't use the index to do an ORDER BY).
HEAP tables are allocated in small blocks. The tables
are 100% dynamic (on inserting). No overflow areas and no extra key
space are needed. Deleted rows are put in a linked list and are
reused when you insert new data into the table.
You need enough extra memory for all HEAP tables that you want to use at
the same time.
To free memory, you should execute DELETE FROM heap_table,
TRUNCATE heap_table, or DROP TABLE heap_table.
MyISAM
table to a HEAP table.
The server will not create HEAP tables bigger than max_heap_table_size.
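For example, the limit could be set in the option file like this (32M is an arbitrary illustration):
[mysqld]
set-variable = max_heap_table_size=32M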
The memory needed for one row in a HEAP table is:
SUM_OVER_ALL_KEYS(max_length_of_key + sizeof(char*) * 2) + ALIGN(length_of_row+1, sizeof(char*))
sizeof(char*) is 4 on 32-bit machines and 8 on 64-bit machines.
InnoDB Tables
InnoDB provides MySQL with a transaction-safe (ACID compliant)
storage engine with commit, rollback, and crash recovery capabilities.
InnoDB does locking on row level and also provides an Oracle-style
consistent
non-locking read in SELECTs. These features increase
multiuser concurrency and performance. There is no need for
lock escalation in InnoDB,
because row level locks in InnoDB fit in very small space.
InnoDB is the first MySQL table type to support
FOREIGN KEY constraints.
InnoDB has been designed for maximum performance when processing large data volumes. Its CPU efficiency is probably not matched by any other disk-based relational database engine.
InnoDB is used in production at numerous large database sites requiring high performance. The famous Internet news site Slashdot.org runs on InnoDB. Mytrix, Inc. stores over 1 TB of data in InnoDB, and another site handles an average load of 800 inserts/updates per second in InnoDB.
Technically, InnoDB is a complete database backend placed under MySQL. InnoDB has its own buffer pool for caching data and indexes in main memory. InnoDB stores its tables and indexes in a tablespace, which may consist of several files (or raw disk partitions). This is different from, for example, MyISAM tables where each table is stored as a separate file. InnoDB tables can be of any size also on those operating systems where file-size is limited to 2 GB.
You can find the latest information about InnoDB at http://www.innodb.com/. The most up-to-date version of the InnoDB manual is always placed there, and you can also order commercial licenses and support for InnoDB.
In the source distribution of MySQL, InnoDB appears as a subdirectory.
InnoDB is distributed under the GNU GPL License Version 2 (of June 1991).
From MySQL version 4.0, InnoDB is enabled by default.
The following information only applies to the 3.23 series.
InnoDB tables are included in the MySQL source distribution starting from 3.23.34a and are activated in the MySQL -Max binary of the 3.23 series. For Windows the -Max binaries are contained in the standard distribution.
If you have downloaded a binary version of MySQL that includes
support for InnoDB, simply follow the instructions of the
MySQL manual
for installing a binary version of MySQL. If you already have
MySQL-3.23 installed, then the simplest way to install
MySQL -Max is to replace the server executable `mysqld'
with the corresponding executable in the -Max distribution.
MySQL and MySQL -Max differ only in the server executable.
See section 2.2.11 Installing a MySQL Binary Distribution.
See section 4.7.5 mysqld-max, An Extended mysqld Server.
To compile MySQL with InnoDB support,
download MySQL-3.23.34a or newer version from
http://www.mysql.com/
and configure MySQL with the
--with-innodb option. See the
MySQL manual
about installing a MySQL source distribution.
See section 2.3 Installing a MySQL Source Distribution.
cd /path/to/source/of/mysql-3.23.37
./configure --with-innodb
To use InnoDB tables in MySQL-Max-3.23 you must specify
configuration parameters in the [mysqld] section of the
configuration file `my.cnf', or on Windows optionally in
`my.ini'.
At the minimum, in 3.23 you must specify innodb_data_file_path
where you specify the names and the sizes of datafiles. If you do
not mention innodb_data_home_dir in `my.cnf' the default
is to create these files in the datadir of MySQL.
If you specify innodb_data_home_dir as an empty string,
then you can give absolute paths to your data files in
innodb_data_file_path.
The minimal way
to modify it is to add to the [mysqld] section the line
innodb_data_file_path=ibdata:30M
but to get good performance it is best that you specify options as recommended. See section 7.5.3 InnoDB Startup Options.
To enable InnoDB tables in MySQL version 3.23, see
section 7.5.2 InnoDB in MySQL Version 3.23.
In MySQL-4.0 you are not required to do anything specific to
enable InnoDB tables.
The default behaviour is to create an auto-extending 10 MB file
`ibdata1' in the datadir of MySQL.
(In MySQL-4.0.0 and 4.0.1 the datafile is 64 MB and not auto-extending.)
Note: To get good performance you should explicitly set the InnoDB parameters listed in the following examples.
If you don't want to use InnoDB tables, you can add the
skip-innodb option to your MySQL option file.
Starting from versions 3.23.50 and 4.0.2 InnoDB allows the last
datafile on the innodb_data_file_path line
to be specified as auto-extending. The syntax for
innodb_data_file_path is then the following:
pathtodatafile:sizespecification;pathtodatafile:sizespecification;... ... ;pathtodatafile:sizespecification[:autoextend[:max:sizespecification]]
If you specify the last datafile with the autoextend option, InnoDB will extend the last datafile if it runs out of free space in the tablespace. The increment is 8 MB at a time. An example:
innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:100M:autoextend
instructs InnoDB to create just a single datafile whose initial size is
100 MB and which is extended in 8 MB blocks when space runs out.
If the disk becomes full you may want to add another data
file to another disk, for example. Then you have to look at the size
of `ibdata1', round the size downward to
the closest multiple of 1024 * 1024 bytes (= 1 MB), and specify
the rounded size of `ibdata1' explicitly in
innodb_data_file_path.
After that you can add another datafile:
innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:988M;/disk2/ibdata2:50M:autoextend
Be cautious on filesystems where the maximum file-size is 2 GB. InnoDB is not aware of the OS maximum file-size. On those filesystems you might want to specify the max size for the datafile:
innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:100M:autoextend:max:2000M
A simple `my.cnf' example. Suppose you have a computer
with 128 MB RAM and one hard disk. Below is an example of
possible configuration parameters in `my.cnf' or
`my.ini' for InnoDB. We assume you are running
MySQL-Max-3.23.50 or later, or MySQL-4.0.2 or later.
This example suits most users, both on Unix and Windows,
who do not want to distribute InnoDB datafiles and
log files on several disks. This creates an
auto-extending data file `ibdata1' and two InnoDB log files
`ib_logfile0' and `ib_logfile1' to the
datadir of MySQL (typically `/mysql/data').
Also the small archived InnoDB log file
`ib_arch_log_0000000000' ends up in the datadir.
[mysqld]
# You can write your other MySQL server options here
# ...
# Data file(s) must be able to
# hold your data and indexes.
# Make sure you have enough
# free disk space.
innodb_data_file_path = ibdata1:10M:autoextend
# Set buffer pool size to
# 50 - 80 % of your computer's
# memory
set-variable = innodb_buffer_pool_size=70M
set-variable = innodb_additional_mem_pool_size=10M
# Set the log file size to about
# 25 % of the buffer pool size
set-variable = innodb_log_file_size=20M
set-variable = innodb_log_buffer_size=8M
# Set ..flush_log_at_trx_commit
# to 0 if you can afford losing
# some last transactions
innodb_flush_log_at_trx_commit=1
Check that the MySQL server has the rights to create files in
datadir.
Note that datafiles must be < 2 GB in some file systems! The combined size of the log files must be < 4 GB. The combined size of datafiles must be >= 10 MB.
When you create an InnoDB database for the first time, it is best to start the MySQL server from the command prompt. InnoDB will then print information about the database creation to the screen, so you can see what is happening. See the next section for what the printout should look like. For example, on Windows you can start `mysqld-max.exe' with:
your-path-to-mysqld>mysqld-max --console
Where to put `my.cnf' or `my.ini' in Windows? The rules for Windows are the following:
The `my.ini' file should be placed in the WINDIR directory, for example `C:\WINDOWS' or `C:\WINNT'. You can use the SET
command of MS-DOS to print the value of WINDIR.
Where to specify options in Unix? On Unix `mysqld' reads options from the following files, if they exist, in the following order:
`/etc/my.cnf'                Global options
`COMPILATION_DATADIR/my.cnf' Server-specific options
`defaults-extra-file'        The file specified with --defaults-extra-file=....
`~/.my.cnf'                  User-specific options
`COMPILATION_DATADIR' is the MySQL data directory which was
specified as a ./configure option when `mysqld'
was compiled
(typically `/usr/local/mysql/data' for a binary installation or `/usr/local/var' for a source installation).
If you are not sure from where `mysqld' reads its `my.cnf'
or `my.ini', you can give the path as the first command-line
option to the server:
mysqld --defaults-file=your_path_to_my_cnf.
InnoDB forms the directory path to a datafile by textually catenating
innodb_data_home_dir to a datafile name or path in
innodb_data_file_path, adding a possible slash or
backslash in between if needed. If the keyword
innodb_data_home_dir is not mentioned in
`my.cnf' at all, the default for it is the
'dot' directory `./' which means the datadir of MySQL.
An advanced `my.cnf' example. Suppose you have a Linux computer with 2 GB RAM and three 60 GB hard disks (at directory paths `/', `/dr2' and `/dr3'). Below is an example of possible configuration parameters in `my.cnf' for InnoDB.
Note that InnoDB does not create directories: you
have to create them yourself. Use the Unix or MS-DOS
mkdir command to create the data and log group home directories.
[mysqld]
# You can write your other MySQL server options here
# ...
innodb_data_home_dir =
# Data files must be able to
# hold your data and indexes
innodb_data_file_path = /ibdata/ibdata1:2000M;/dr2/ibdata/ibdata2:2000M:autoextend
# Set buffer pool size to
# 50 - 80 % of your computer's
# memory, but make sure on Linux
# x86 total memory usage is
# < 2 GB
set-variable = innodb_buffer_pool_size=1G
set-variable = innodb_additional_mem_pool_size=20M
innodb_log_group_home_dir = /dr3/iblogs
# .._log_arch_dir must be the same
# as .._log_group_home_dir
innodb_log_arch_dir = /dr3/iblogs
set-variable = innodb_log_files_in_group=3
# Set the log file size to about
# 15 % of the buffer pool size
set-variable = innodb_log_file_size=150M
set-variable = innodb_log_buffer_size=8M
# Set ..flush_log_at_trx_commit to
# 0 if you can afford losing
# some last transactions
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
#innodb_flush_method=fdatasync
#set-variable = innodb_thread_concurrency=5
Note that we have placed the two datafiles on different disks. InnoDB will fill the tablespace formed by the datafiles from bottom up. In some cases it will improve the performance of the database if all data is not placed on the same physical disk. Putting log files on a different disk from data is very often beneficial for performance. You can also use raw disk partitions (raw devices) as datafiles. In some Unixes they speed up I/O. See the manual section on InnoDB file space management about how to specify them in `my.cnf'.
Warning: on Linux x86 you must be careful you do not set memory usage too high. glibc will allow the process heap to grow over thread stacks, which will crash your server. It is a risk if the value of
innodb_buffer_pool_size + key_buffer + max_connections * (sort_buffer + read_buffer_size) + max_connections * 2 MB
is close to 2 GB or exceeds 2 GB. Each thread will use a stack
(often 2 MB, but in MySQL AB binaries only 256 KB) and in the worst case also
sort_buffer + read_buffer_size
additional memory.
How to tune other `mysqld' server parameters? Typical values which suit most users are:
skip-locking
set-variable = max_connections=200
set-variable = read_buffer_size=1M
set-variable = sort_buffer=1M
# Set key_buffer to 5 - 50%
# of your RAM depending on how
# much you use MyISAM tables, but
# keep key_buffer + InnoDB
# buffer pool size < 80% of
# your RAM
set-variable = key_buffer=...
Note that some parameters are given using the numeric `my.cnf'
parameter format: set-variable = innodb... = 123, others
(string and boolean parameters) with another format:
innodb_... = ... .
The meanings of the configuration parameters are the following:
| Option | Description |
innodb_data_home_dir |
The common part of the directory path for all InnoDB datafiles.
If you do not mention this option in `my.cnf'
the default is the datadir of MySQL.
You can specify this also as an empty string, in which case you
can use absolute file paths in innodb_data_file_path.
|
innodb_data_file_path | Paths to individual datafiles and their sizes. The full directory path to each datafile is acquired by concatenating innodb_data_home_dir to the paths specified here. The file sizes are specified in megabytes, hence the 'M' after the size specification above. InnoDB also understands the abbreviation 'G', 1 G meaning 1024 MB. Starting from 3.23.44 you can set the file-size bigger than 4 GB on those operating systems which support big files. On some operating systems files must be < 2 GB. The sum of the sizes of the files must be at least 10 MB. |
innodb_mirrored_log_groups | Number of identical copies of log groups we keep for the database. Currently this should be set to 1. |
innodb_log_group_home_dir | Directory path to InnoDB log files. |
innodb_log_files_in_group | Number of log files in the log group. InnoDB writes to the files in a circular fashion. Value 3 is recommended here. |
innodb_log_file_size | Size of each log file in a log group in megabytes. Sensible values range from 1M to 1/nth of the size of the buffer pool specified below, where n is the number of log files in the group. The bigger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. But bigger log files also mean that recovery will be slower in case of a crash. The combined size of log files must be < 4 GB on 32-bit computers. |
innodb_log_buffer_size | The size of the buffer which InnoDB uses to write log to the log files on disk. Sensible values range from 1M to 8M. A big log buffer allows large transactions to run without a need to write the log to disk until the transaction commit. Thus, if you have big transactions, making the log buffer big will save disk I/O. |
innodb_flush_log_at_trx_commit | Normally you set this to 1, meaning that at a transaction commit the log is flushed to disk, and the modifications made by the transaction become permanent, and survive a database crash. If you are willing to compromise this safety, and you are running small transactions, you may set this to 0 or 2 to reduce disk I/O to the logs. Value 0 means that the log is only written to the log file and the log file flushed to disk approximately once per second. Value 2 means the log is written to the log file at each commit, but the log file is only flushed to disk approximately once per second. The default value is 1 starting from MySQL-4.0.13, previously it was 0. |
innodb_log_arch_dir |
The directory where fully written log files would be archived if we used
log archiving. The value of this parameter should currently be set the
same as innodb_log_group_home_dir.
|
innodb_log_archive | This value should currently be set to 0. As recovery from a backup is done by MySQL using its own log files, there is currently no need to archive InnoDB log files. |
innodb_buffer_pool_size | The size of the memory buffer InnoDB uses to cache data and indexes of its tables. The bigger you set this the less disk I/O is needed to access data in tables. On a dedicated database server you may set this parameter up to 80% of the machine physical memory size. Do not set it too large, though, because competition of the physical memory may cause paging in the operating system. |
innodb_buffer_pool_awe_mem_mb | Size of the buffer pool in MB, if it is placed in the AWE memory of 32-bit Windows. Available starting from 4.1.0 and only relevant in 32-bit Windows. If your 32-bit Windows operating system supports > 4 GB memory, so-called Address Windowing Extensions, you can allocate the InnoDB buffer pool into the AWE physical memory using this parameter. The maximum possible value for this is 64000. If this parameter is specified, then innodb_buffer_pool_size is the window in the 32-bit address space of mysqld where InnoDB maps that AWE memory. A good value for innodb_buffer_pool_size is then 500M. |
innodb_additional_mem_pool_size | Size of a memory pool InnoDB uses to store data dictionary information and other internal data structures. A sensible value for this might be 2M, but the more tables you have in your application the more you will need to allocate here. If InnoDB runs out of memory in this pool, it will start to allocate memory from the operating system, and write warning messages to the MySQL error log. |
innodb_file_io_threads | Number of file I/O threads in InnoDB. Normally, this should be 4, but on Windows disk I/O may benefit from a larger number. |
innodb_lock_wait_timeout |
Timeout in seconds an InnoDB transaction may wait for a lock before
being rolled back. InnoDB automatically detects transaction deadlocks
in its own lock table and rolls back the transaction. If you use
LOCK TABLES command, or other transaction-safe storage engines
than InnoDB in the same transaction, then a deadlock may arise which
InnoDB cannot notice. In cases like this the timeout is useful to
resolve the situation.
|
innodb_flush_method |
(Available from 3.23.40 up.)
The default value for this is fdatasync.
Another option is O_DSYNC.
|
Suppose you have installed MySQL and have edited `my.cnf' so that it contains the necessary InnoDB configuration parameters. Before starting MySQL you should check that the directories you have specified for InnoDB datafiles and log files exist and that you have access rights to those directories. InnoDB cannot create directories, only files. Check also you have enough disk space for the data and log files.
When you now start MySQL, InnoDB will start creating your datafiles and log files. InnoDB will print something like the following:
~/mysqlm/sql > mysqld
InnoDB: The first specified datafile /home/heikki/data/ibdata1 did not exist:
InnoDB: a new database to be created!
InnoDB: Setting file /home/heikki/data/ibdata1 size to 134217728
InnoDB: Database physically writes the file full: wait...
InnoDB: datafile /home/heikki/data/ibdata2 did not exist: new to be created
InnoDB: Setting file /home/heikki/data/ibdata2 size to 262144000
InnoDB: Database physically writes the file full: wait...
InnoDB: Log file /home/heikki/data/logs/ib_logfile0 did not exist: new to be created
InnoDB: Setting log file /home/heikki/data/logs/ib_logfile0 size to 5242880
InnoDB: Log file /home/heikki/data/logs/ib_logfile1 did not exist: new to be created
InnoDB: Setting log file /home/heikki/data/logs/ib_logfile1 size to 5242880
InnoDB: Log file /home/heikki/data/logs/ib_logfile2 did not exist: new to be created
InnoDB: Setting log file /home/heikki/data/logs/ib_logfile2 size to 5242880
InnoDB: Started
mysqld: ready for connections
A new InnoDB database has now been created. You can connect to the MySQL
server with the usual MySQL client programs like mysql.
When you shut down the MySQL server with `mysqladmin shutdown',
InnoDB output will be like the following:
010321 18:33:34  mysqld: Normal shutdown
010321 18:33:34  mysqld: Shutdown Complete
InnoDB: Starting shutdown...
InnoDB: Shutdown completed
You can now look at the datafiles and logs directories and you will see the files created. The log directory will also contain a small file named `ib_arch_log_0000000000'. That file resulted from the database creation, after which InnoDB switched off log archiving. When MySQL is again started, the output will be like the following:
~/mysqlm/sql > mysqld
InnoDB: Started
mysqld: ready for connections
If InnoDB prints an operating system error in a file operation, usually the problem is one of the following:
You have a syntax error or a wrong directory path in innodb_data_home_dir
or innodb_data_file_path.
If something goes wrong in an InnoDB database creation, you should delete all files created by InnoDB. This means all datafiles, all log files, the small archived log file, and in the case you already did create some InnoDB tables, delete also the corresponding `.frm' files for these tables from the MySQL database directories. Then you can try the InnoDB database creation again.
Suppose you have started the MySQL client with the command
mysql test.
To create a table in the InnoDB format you must specify
TYPE = InnoDB in the table creation SQL command:
CREATE TABLE CUSTOMER (A INT, B CHAR (20), INDEX (A)) TYPE = InnoDB;
This SQL command will create a table and an index on column A
into the InnoDB tablespace consisting of the datafiles you specified
in `my.cnf'. In addition MySQL will create a file
`CUSTOMER.frm' to the MySQL database directory `test'.
Internally, InnoDB will add to its own data dictionary an entry
for table 'test/CUSTOMER'. Thus you can create a table
of the same name CUSTOMER in another database of MySQL, and
the table names will not collide inside InnoDB.
You can query the amount of free space in the InnoDB tablespace
by issuing the table status command of MySQL for any table you have
created with TYPE = InnoDB. Then the amount of free
space in the tablespace appears in the table comment section in the
output of SHOW. An example:
SHOW TABLE STATUS FROM test LIKE 'CUSTOMER'
Note that the statistics SHOW gives about InnoDB tables
are only approximate: they are used in SQL optimisation. Table and
index reserved sizes in bytes are accurate, though.
InnoDB does not have a special optimisation for separate index creation.
Therefore it does not pay to export and import the table and create indexes
afterwards.
The fastest way to alter a table to InnoDB is to do the inserts
directly to an InnoDB table, that is, use ALTER TABLE ... TYPE=INNODB,
or create an empty InnoDB table with identical definitions and insert
the rows with INSERT INTO ... SELECT * FROM ....
To get better control over the insertion process, it may be good to insert big tables in pieces:
INSERT INTO newtable SELECT * FROM oldtable WHERE yourkey > something AND yourkey <= somethingelse;
After all data has been inserted you can rename the tables.
During the conversion of big tables you should set the InnoDB buffer pool size large to reduce disk I/O, but not larger than 80% of the physical memory. You should also set the InnoDB log files and the log buffer large.
Make sure you do not run out of tablespace: InnoDB tables take a lot
more space than MyISAM tables. If an ALTER TABLE runs out
of space, it will start a rollback, and that can take hours if it is
disk-bound.
During inserts InnoDB uses the insert buffer to merge secondary index records
into indexes in batches. That saves a lot of disk I/O. During rollback no such
mechanism is used, and the rollback can take 30 times longer than the
insertion.
In the case of a runaway rollback, if you do not have valuable data in your database, it is better that you kill the database process and delete all InnoDB data and log files and all InnoDB table `.frm' files, and start your job again, rather than wait for millions of disk I/Os to complete.
Starting from version 3.23.43b InnoDB features foreign key constraints. InnoDB is the first MySQL table type which allows you to define foreign key constraints to guard the integrity of your data.
The syntax of a foreign key constraint definition in InnoDB:
[CONSTRAINT symbol] FOREIGN KEY (index_col_name, ...)
REFERENCES table_name (index_col_name, ...)
[ON DELETE {CASCADE | SET NULL | NO ACTION
| RESTRICT}]
[ON UPDATE {CASCADE | SET NULL | NO ACTION
| RESTRICT}]
Both tables have to be InnoDB type and there must be an index where the foreign key and the referenced key are listed as the FIRST columns. InnoDB does not auto-create indexes on foreign keys or referenced keys: you have to create them explicitly.
Corresponding columns in the foreign key
and the referenced key must have similar internal data types
inside InnoDB so that they can be compared without a type
conversion.
The size and the signedness of integer types has to be the same.
The length of string types need not be the same.
If you specify a SET NULL action, make sure you
have not declared the columns in the child table
NOT NULL.
If MySQL gives the error number 1005 from a CREATE TABLE
statement, and the error message string refers to errno 150, then
the table creation failed because a foreign key constraint was not
correctly formed.
Similarly, if an ALTER TABLE fails and it refers to errno
150, that means a foreign key definition would be incorrectly
formed for the altered table. Starting from version 4.0.13,
you can use SHOW INNODB STATUS to look at a detailed explanation
of the latest InnoDB foreign key error in the server.
Starting from version 3.23.50, InnoDB does not check foreign key constraints on those foreign key or referenced key values which contain a NULL column.
A deviation from SQL standards: if in the parent table
there are several rows which have the same referenced key value,
then InnoDB acts in foreign key checks as if the other parent
rows with the same key value did not exist. For example,
if you have defined a RESTRICT type constraint, and there
is a child row with several parent rows, InnoDB does not allow
the deletion of any of those parent rows.
Starting from version 3.23.50, you can also associate the
ON DELETE CASCADE or ON DELETE SET NULL clause with
the foreign key constraint. Corresponding ON UPDATE options
are available starting from 4.0.8. If ON DELETE CASCADE is
specified, and a row in the parent table is deleted, then InnoDB
automatically deletes also all those rows in the child table
whose foreign key values are equal to the referenced key value in
the parent row. If ON DELETE SET NULL is specified, the
child rows are automatically updated so that the columns in the
foreign key are set to the SQL NULL value.
A deviation from SQL standards: if
ON UPDATE CASCADE or ON UPDATE SET NULL recurses to
update the SAME TABLE it has already updated during the cascade,
it acts like RESTRICT. This is to prevent infinite loops
resulting from cascaded updates. A self-referential ON DELETE
SET NULL, on the other hand, works starting from 4.0.13.
A self-referential ON DELETE CASCADE has always worked.
An example:
CREATE TABLE parent(id INT NOT NULL, PRIMARY KEY (id)) TYPE=INNODB;
CREATE TABLE child(id INT, parent_id INT, INDEX par_ind (parent_id),
FOREIGN KEY (parent_id) REFERENCES parent(id)
ON DELETE SET NULL
) TYPE=INNODB;
Starting from version 3.23.50 InnoDB allows you to add a new foreign key constraint to a table through
ALTER TABLE yourtablename ADD [CONSTRAINT symbol] FOREIGN KEY (...) REFERENCES anothertablename(...) [on_delete_and_on_update_actions]
Remember to create the required indexes first, though.
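As a sketch, assume a table child2 identical to the child table above but created without the foreign key, and assume the index on parent_id does not yet exist (the table, index, and constraint names here are only illustrative):
CREATE INDEX par_ind ON child2 (parent_id);
ALTER TABLE child2 ADD CONSTRAINT fk_parent FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE SET NULL;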
Starting from version 4.0.13, InnoDB supports
ALTER TABLE DROP FOREIGN KEY internally_generated_foreign_key_id
You have to use SHOW CREATE TABLE to look up the internally
generated foreign key id when you want to drop a foreign key.
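For example, if SHOW CREATE TABLE yourtablename reports an internally generated id such as yourtablename_ibfk_1 (this id is only an assumed example; use whatever id the output actually shows), the constraint could be dropped with:
ALTER TABLE yourtablename DROP FOREIGN KEY yourtablename_ibfk_1;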
In InnoDB versions < 3.23.50 ALTER TABLE
or CREATE INDEX
should not be used in connection with tables which have foreign
key constraints or which are referenced in foreign key constraints:
Any ALTER TABLE removes all foreign key
constraints defined for the table. You should not use
ALTER TABLE on the referenced table either, but
use DROP TABLE and CREATE TABLE to modify the
schema. When MySQL does an ALTER TABLE it may internally
use RENAME TABLE, and that will confuse the
foreign key constraints which refer to the table.
A CREATE INDEX statement is in MySQL
processed as an ALTER TABLE, and these
restrictions apply also to it.
When doing foreign key checks, InnoDB sets shared row level locks on child or parent records it has to look at. InnoDB checks foreign key constraints immediately: the check is not deferred to transaction commit.
If you want to ignore foreign key constraints during, for example, a
LOAD DATA operation, you can do SET FOREIGN_KEY_CHECKS=0.
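A minimal sketch (the file and table names are only illustrative):
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA INFILE 'child.txt' INTO TABLE child;
SET FOREIGN_KEY_CHECKS=1;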
InnoDB allows you to drop any table even though that would break the foreign key constraints which reference the table. When you drop a table the constraints which were defined in its create statement are also dropped.
If you re-create a table which was dropped, it has to have a definition which conforms to the foreign key constraints referencing it. It must have the right column names and types, and it must have indexes on the referenced keys, as stated above. If these are not satisfied, MySQL returns error number 1005 and refers to errno 150 in the error message string.
Starting from version 3.23.50 InnoDB returns the foreign key definitions of a table when you call
SHOW CREATE TABLE yourtablename
Then also `mysqldump' produces correct definitions of tables to the dump file, and does not forget about the foreign keys.
You can also list the foreign key constraints for a table
T with
SHOW TABLE STATUS FROM yourdatabasename LIKE 'T'
The foreign key constraints are listed in the table comment of the output.
From versions 3.23.50 and 4.0.2 you can specify the last InnoDB datafile
to autoextend. Alternatively, you can increase the size of your tablespace
by specifying an additional datafile. To do this you have to shut down
the MySQL server, edit the `my.cnf' file adding a new datafile
to innodb_data_file_path, and then start the MySQL server again.
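For example, if the tablespace originally consisted of a single 1 GB datafile, the [mysqld] section could be changed like this to add a second file (the file names and sizes here are only illustrative; keep the specification of the existing file unchanged):
innodb_data_file_path = ibdata1:1G;ibdata2:500M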
Currently you cannot remove a datafile from InnoDB. To decrease the size of your database you have to use `mysqldump' to dump all your tables, create a new database, and import your tables to the new database.
If you want to change the number or the size of your InnoDB log files, you have to shut down MySQL and make sure that it shuts down without errors. Then copy the old log files into a safe place, just in case something went wrong in the shutdown and you need them to recover the database. Then delete the old log files from the log file directory, edit `my.cnf', and start MySQL again. InnoDB will tell you at startup that it is creating new log files.
The key to safe database management is taking regular backups.
InnoDB Hot Backup is an online backup tool you can use to backup your InnoDB database while it is running. InnoDB Hot Backup does not require you to shut down your database and it does not set any locks or disturb your normal database processing. InnoDB Hot Backup is a non-free additional tool which is not included in the standard MySQL distribution. See the InnoDB Hot Backup homepage http://www.innodb.com/hotbackup.html for detailed information and screenshots.
If you are able to shut down your MySQL server, then to take a 'binary' backup of your database you have to do the following:
In addition to taking the binary backups described above, you should also regularly take dumps of your tables with `mysqldump'. The reason for this is that a binary file may be corrupted without you noticing it. Dumped tables are stored in text files which are human-readable and much simpler than database binary files. Seeing table corruption from dumped files is easier, and since their format is simpler, the chance for serious data corruption in them is smaller.
A good idea is to take the dumps at the same time you take a binary backup of your database. You have to shut out all clients from your database to get a consistent snapshot of all your tables into your dumps. Then you can take the binary backup, and you will then have a consistent snapshot of your database in two formats.
To be able to recover your InnoDB database to the present from the binary backup described above, you have to run your MySQL database with the general logging and log archiving of MySQL switched on. Here by the general logging we mean the logging mechanism of the MySQL server which is independent of InnoDB logs.
To recover from a crash of your MySQL server process, the only thing you have to do is to restart it. InnoDB will automatically check the logs and perform a roll-forward of the database to the present. InnoDB will automatically roll back uncommitted transactions which were present at the time of the crash. During recovery, InnoDB will print out something like the following:
~/mysqlm/sql > mysqld
InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 13674004
InnoDB: Doing recovery: scanned up to log sequence number 0 13739520
InnoDB: Doing recovery: scanned up to log sequence number 0 13805056
InnoDB: Doing recovery: scanned up to log sequence number 0 13870592
InnoDB: Doing recovery: scanned up to log sequence number 0 13936128
...
InnoDB: Doing recovery: scanned up to log sequence number 0 20555264
InnoDB: Doing recovery: scanned up to log sequence number 0 20620800
InnoDB: Doing recovery: scanned up to log sequence number 0 20664692
InnoDB: 1 uncommitted transaction(s) which must be rolled back
InnoDB: Starting rollback of uncommitted transactions
InnoDB: Rolling back trx no 16745
InnoDB: Rolling back of trx no 16745 completed
InnoDB: Rollback of uncommitted transactions completed
InnoDB: Starting an apply batch of log records to the database...
InnoDB: Apply batch completed
InnoDB: Started
mysqld: ready for connections
If your database gets corrupted or your disk fails, you have to do the recovery from a backup. In the case of corruption, you should first find a backup which is not corrupted. From a backup do the recovery from the general log files of MySQL according to instructions in the MySQL manual.
InnoDB implements a checkpoint mechanism called a fuzzy checkpoint. InnoDB flushes modified database pages from the buffer pool in small batches; there is no need to flush the buffer pool in one single batch, which would in practice stop processing of user SQL statements for a while.
In crash recovery InnoDB looks for a checkpoint label written to the log files. It knows that all modifications to the database before the label are already present on the disk image of the database. Then InnoDB scans the log files forward from the place of the checkpoint applying the logged modifications to the database.
InnoDB writes to the log files in a circular fashion. All committed modifications which make the database pages in the buffer pool different from the images on disk must be available in the log files in case InnoDB has to do a recovery. This means that when InnoDB starts to reuse a log file in the circular fashion, it has to make sure that the database page images on disk already contain the modifications logged in the log file InnoDB is going to reuse. In other words, InnoDB has to make a checkpoint and often this involves flushing of modified database pages to disk.
The above explains why making your log files very big may save disk I/O in checkpointing. It can make sense to set the total size of the log files as big as the buffer pool or even bigger. The drawback in big log files is that crash recovery can last longer because there will be more log to apply to the database.
In Windows InnoDB stores the database names and table names internally always in lower case. To move databases in a binary format from Unix to Windows or from Windows to Unix you should have all table and database names in lower case.
InnoDB data and log files are binary-compatible on all platforms
if the floating-point number format on the machines is the same.
You can move an InnoDB database simply by copying all the relevant
files, which we already listed in the previous section on backing up
a database. If the floating-point formats on the machines are
different but you have not used FLOAT or DOUBLE
data types in your tables then the procedure is the same: just copy
the relevant files. If the formats are different and your tables
contain floating-point data, you have to use `mysqldump'
and `mysqlimport' to move those tables.
A performance tip is to switch off auto-commit mode when you import data into your database, assuming your tablespace has enough space for the big rollback segment the big import transaction will generate. Do the commit only after importing a whole table or a segment of a table.
In the InnoDB transaction model the goal has been to combine the best properties of a multi-versioning database with traditional two-phase locking. InnoDB does locking on the row level and runs queries by default as non-locking consistent reads, in the style of Oracle. The lock table in InnoDB is stored so space-efficiently that lock escalation is not needed: typically several users are allowed to lock every row in the database, or any random subset of the rows, without InnoDB running out of memory.
In InnoDB all user activity happens inside transactions. If the autocommit mode is used in MySQL, then each SQL statement will form a single transaction. MySQL always starts a new connection with the autocommit mode switched on.
If the autocommit mode is
switched off with SET AUTOCOMMIT = 0,
then we can think that a user always has a transaction
open. If he issues
the SQL COMMIT or ROLLBACK statement,
it ends the current transaction, and a new one starts. Both statements
will release all InnoDB locks that were set during the
current transaction. A COMMIT means that the
changes made in the current transaction are made permanent
and become visible to other users. A ROLLBACK,
on the other hand, cancels all modifications made by the current
transaction.
If the connection has AUTOCOMMIT = 1, then the user
can still perform a multi-statement transaction by starting it with
BEGIN and ending it with COMMIT
or ROLLBACK.
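For example (the table name t and the values are only illustrative):
BEGIN;
INSERT INTO t VALUES (1);
UPDATE t SET a = 2 WHERE a = 1;
COMMIT;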
In terms of the SQL-92 transaction isolation levels,
the InnoDB default is REPEATABLE READ.
Starting from version 4.0.5, InnoDB offers all 4 different
transaction isolation levels described by the SQL-92 standard.
You can set the default isolation level for all connections
in the [mysqld] section of `my.cnf':
transaction-isolation = {READ-UNCOMMITTED | READ-COMMITTED
| REPEATABLE-READ | SERIALIZABLE}
A user can change the isolation level of a single session or all new incoming connections with the
SET [SESSION | GLOBAL] TRANSACTION ISOLATION LEVEL
{READ UNCOMMITTED | READ COMMITTED
| REPEATABLE READ | SERIALIZABLE}
SQL statement. Note that there are no hyphens in level names
in the SQL syntax.
If you specify the keyword GLOBAL
in the above statement, it will determine the initial
isolation level of new incoming connections, but will not change
the isolation level of old connections.
Any user is free to change the isolation level of his session, even
in the middle of a transaction.
In versions < 3.23.50 SET TRANSACTION had no effect
on InnoDB tables. In versions < 4.0.5 only REPEATABLE READ
and SERIALIZABLE were available.
You can query the global and session transaction isolation levels with:
SELECT @@global.tx_isolation;
SELECT @@tx_isolation;
In row level locking InnoDB uses so-called next-key locking. That means that besides index records, InnoDB can also lock the 'gap' before an index record to block insertions by other users immediately before the index record. A next-key lock means a lock which locks an index record and the gap before it. A gap lock means a lock which only locks a gap before some index record.
A detailed description of each isolation level in InnoDB:
READ UNCOMMITTED This is also called
'dirty read': non-locking SELECTs are performed
so that we do not look at a possible earlier version of a record;
thus they are not 'consistent' reads under this isolation level;
otherwise this level works like READ COMMITTED.
READ COMMITTED
Somewhat Oracle-like isolation level.
All SELECT ... FOR UPDATE and
SELECT ... LOCK IN SHARE MODE
statements
only lock the index records, NOT the gaps before them, and
thus allow free inserting of new records next to locked
records.
UPDATE and DELETE which use
a unique index with a unique search condition,
only lock the index record found, not the gap before it.
But still in range type
UPDATE and DELETE InnoDB
must set next-key or gap locks and block insertions
by other users to the
gaps covered by the range. This is necessary
since 'phantom rows' have to be blocked for MySQL
replication and recovery to work.
Consistent reads behave like in
Oracle: each consistent read, even within the same
transaction, sets and reads its own fresh snapshot.
REPEATABLE READ This is the default isolation level of
InnoDB.
SELECT ... FOR UPDATE, SELECT ... LOCK IN SHARE MODE,
UPDATE, and DELETE
which use
a unique index with a unique search condition,
only lock the index record found, not the gap before it.
Otherwise these operations employ next-key locking, locking
the index range scanned with next-key or gap locks, and
block new insertions by other users.
In consistent reads there is an important difference
from the previous isolation level: in this level
all consistent reads within the same transaction read the
same snapshot established by the first read. This convention
means that if you issue several plain SELECTs
within the same transaction, these SELECTs are
consistent also with respect to each other.
SERIALIZABLE This level is like
the previous one, but
all plain SELECTs are implicitly converted to
SELECT ... LOCK IN SHARE MODE.
A consistent read means that InnoDB uses its multi-versioning to present to a query a snapshot of the database at a point in time. The query will see the changes made by exactly those transactions that committed before that point of time, and no changes made by later or uncommitted transactions. The exception to this rule is that the query will see the changes made by the transaction itself which issues the query.
If you are running with the default REPEATABLE READ isolation level,
then all consistent reads within the same transaction read the snapshot
established by the first such read in that transaction. You can get a
fresher snapshot for your queries by committing the current transaction
and after that issuing new queries.
Consistent read is the default mode in which InnoDB processes
SELECT statements in READ COMMITTED and
REPEATABLE READ isolation levels. A consistent read
does not set any locks on the tables it accesses, and
therefore other users are free to modify those tables at
the same time a consistent read is being performed on the table.
A consistent read is not convenient in some circumstances.
Suppose you want to add a new row into your table CHILD,
and make sure that the child already has a parent in table
PARENT.
Suppose you use a consistent read to read the table PARENT
and indeed see the parent of the child in the table. Can you now safely
add the child row to table CHILD? No, because it may
happen that meanwhile some other user has deleted the parent row
from the table PARENT, and you are not aware of that.
The solution is to perform the SELECT in a locking
mode, LOCK IN SHARE MODE.
SELECT * FROM PARENT WHERE NAME = 'Jones' LOCK IN SHARE MODE;
Performing a read in share mode means that we read the latest
available data, and set a shared mode lock on the rows we read.
If the latest data belongs to a yet uncommitted transaction of another
user, we will wait until that transaction commits.
A shared mode lock prevents others from updating or deleting
the row we have read. After we see that the above query returns
the parent 'Jones', we can safely add his child
to table CHILD, and commit our transaction.
This example shows how to implement referential
integrity in your application code.
Let us look at another example: we have an integer counter field in
a table CHILD_CODES which we use to assign
a unique identifier to each child we add to table CHILD.
Obviously, using a consistent read or a shared mode read
to read the present value of the counter is not a good idea, since
then two users of the database may see the same value for the
counter, and we will get a duplicate key error when we add
the two children with the same identifier to the table.
In this case there are two good ways to implement the
reading and incrementing of the counter: (1) update the counter
first by incrementing it by 1 and only after that read it,
or (2) read the counter first with
a lock mode FOR UPDATE, and increment after that:
SELECT COUNTER_FIELD FROM CHILD_CODES FOR UPDATE;
UPDATE CHILD_CODES SET COUNTER_FIELD = COUNTER_FIELD + 1;
A SELECT ... FOR UPDATE will read the latest
available data setting exclusive locks on each row it reads.
Thus it sets the same locks a searched SQL UPDATE would set
on the rows.
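One possible way to implement alternative (1) above is the MySQL LAST_INSERT_ID() function; this is only a sketch, not the only way to do it:
UPDATE CHILD_CODES SET COUNTER_FIELD = LAST_INSERT_ID(COUNTER_FIELD + 1);
SELECT LAST_INSERT_ID();
Here the update itself locks the counter row, and the value generated for this connection can afterwards be read without any further access to the table.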
In row level locking InnoDB uses an algorithm called next-key locking. InnoDB does the row level locking so that when it searches or scans an index of a table, it sets shared or exclusive locks on the index records it encounters. Thus the row level locks are more precisely called index record locks.
The locks InnoDB sets on index records also affect the 'gap'
before that index record. If a user has a shared or exclusive
lock on record R in an index, then another user cannot insert
a new index record immediately before R in the index order.
This locking of gaps is done to prevent the so-called phantom
problem. Suppose I want to read and lock all children with identifier
bigger than 100 from table CHILD,
and update some field in the selected rows.
SELECT * FROM CHILD WHERE ID > 100 FOR UPDATE;
Suppose there is an index on table CHILD on column
ID. Our query will scan that index starting from
the first record where ID is bigger than 100.
Now, if the locks set on the index records did not lock out
inserts made in the gaps, a new child might meanwhile be
inserted to the table. If I now execute in my transaction
SELECT * FROM CHILD WHERE ID > 100 FOR UPDATE;
again, I will see a new child in the result set the query returns. This is against the isolation principle of transactions: a transaction should be able to run so that the data it has read does not change during the transaction. If we regard a set of rows as a data item, then the new 'phantom' child would break this isolation principle.
When InnoDB scans an index it can also lock the gap
after the last record in the index. That is just what happens in the previous
example: the locks set by InnoDB will prevent any insert to
the table where ID would be bigger than 100.
You can use next-key locking to implement a uniqueness check in your application: if you read your data in share mode and do not see a duplicate for a row you are going to insert, then you can safely insert your row and know that the next-key lock set on the successor of your row during the read will prevent anyone meanwhile inserting a duplicate for your row. Thus the next-key locking allows you to 'lock' the non-existence of something in your table.
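A sketch of such a check, using the CHILD table from above (the identifier value 102 is only illustrative):
SELECT * FROM CHILD WHERE ID = 102 LOCK IN SHARE MODE;
If the query returns no rows, you can insert the child with ID 102 within the same transaction; the next-key lock set during the read prevents anyone else from inserting ID 102 in the meantime:
INSERT INTO CHILD (ID) VALUES (102);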
SELECT ... FROM ... : this is a consistent read, reading a
snapshot of the database and setting no locks.
SELECT ... FROM ... LOCK IN SHARE MODE : sets shared next-key locks
on all index records the read encounters.
SELECT ... FROM ... FOR UPDATE : sets exclusive next-key locks
on all index records the read encounters.
INSERT INTO ... VALUES (...) : sets an exclusive lock
on the inserted row; note that this lock is not a next-key lock
and does not prevent other users from inserting to the gap before the
inserted row. If a duplicate key error occurs, sets a shared lock
on the duplicate index record.
INSERT INTO T SELECT ... FROM S WHERE ... sets an exclusive
(non-next-key) lock on each row inserted into T. Does
the search on S as a consistent read, but sets shared next-key
locks on S if the MySQL logging is on. InnoDB has to set
locks in the latter case because in roll-forward recovery from a
backup every SQL statement has to be executed in exactly the same
way as it was done originally.
CREATE TABLE ... SELECT ... performs the SELECT
as a consistent read or with shared locks, like in the previous
item.
REPLACE is done like an insert if there is no collision
on a unique key. Otherwise, an exclusive next-key lock is placed
on the row which has to be updated.
UPDATE ... SET ... WHERE ... : sets an exclusive next-key
lock on every record the search encounters.
DELETE FROM ... WHERE ... : sets an exclusive next-key
lock on every record the search encounters.
If a FOREIGN KEY constraint is defined on a table,
any insert, update, or delete which requires checking of the constraint
condition sets shared record level locks on the records it
looks at to check the constraint. Also in the case where the
constraint fails, InnoDB sets these locks.
LOCK TABLES ... : sets table locks. In the implementation
the MySQL layer of code sets these locks. The automatic deadlock detection
of InnoDB cannot detect deadlocks where such table locks are involved:
see the following section.
Also, since MySQL does not know about row level locks,
it is possible that you
get a table lock on a table where another user currently has row level
locks. But that does not put transaction integrity into danger.
See section 7.5.14 Restrictions on InnoDB Tables.
InnoDB automatically detects a deadlock of transactions and rolls back a transaction or transactions to prevent the deadlock. Starting from version 4.0.5, InnoDB will try to pick small transactions to roll back. The size of a transaction is determined by the number of rows it has inserted, updated, or deleted. Previous to 4.0.5, InnoDB always rolled back the transaction whose lock request was the last one to build a deadlock, that is, a cycle in the waits-for graph of transactions.
InnoDB cannot detect deadlocks where a lock set by a MySQL
LOCK TABLES statement is involved, or if a lock set
in another storage engine than InnoDB is involved. You have to resolve
these situations using innodb_lock_wait_timeout set in
`my.cnf'.
When InnoDB performs a complete rollback of a transaction, all the locks of the transaction are released. However, if just a single SQL statement is rolled back as a result of an error, some of the locks set by the SQL statement may be preserved. This is because InnoDB stores row locks in a format where it cannot afterwards know which was set by which SQL statement.
Suppose you are running on the default REPEATABLE READ isolation level.
When you issue a consistent read, that is, an ordinary SELECT
statement, InnoDB will give your transaction a timepoint according
to which your query sees the database. Thus, if transaction B deletes
a row and commits after your timepoint was assigned, then you will
not see the row deleted. Similarly with inserts and updates.
You can advance your timepoint by committing your transaction
and then doing another SELECT.
This is called multi-versioned concurrency control.
             User A                     User B

             SET AUTOCOMMIT=0;          SET AUTOCOMMIT=0;
time
|            SELECT * FROM t;
|            empty set
|                                       INSERT INTO t VALUES (1, 2);
|
v            SELECT * FROM t;
             empty set
                                        COMMIT;
             SELECT * FROM t;
             empty set
             COMMIT;
             SELECT * FROM t;
             ---------------------
             |    1    |    2    |
             ---------------------
Thus user A sees the row inserted by B only when B has committed the insert, and A has committed his own transaction so that the timepoint is advanced past the commit of B.
If you want to see the ``freshest'' state of the database, you should use a locking read:
SELECT * FROM t LOCK IN SHARE MODE;
Deadlocks are a classic problem in transactional databases, but they are not dangerous, unless they are so frequent that you cannot run certain transactions at all. Normally you have to write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock.
InnoDB uses automatic row level locking. You can get deadlocks even in the case of transactions which just insert or delete a single row. That is because these operations are not really 'atomic': they automatically set locks on the (possibly several) index records of the row inserted/deleted.
You can cope with deadlocks and reduce the number of them with the following tricks:
Use SHOW INNODB STATUS in MySQL versions >= 3.23.52 and >= 4.0.3
to determine the cause of the latest deadlock. That can help you to tune
your application to avoid deadlocks.
If you use locking reads SELECT ... FOR UPDATE
or ... LOCK IN SHARE MODE, try using a lower isolation
level, READ COMMITTED.
Use EXPLAIN SELECT to determine that MySQL picks
appropriate indexes for your queries.
If you can afford a SELECT to return data
from an old snapshot, do not add the clause FOR UPDATE
or LOCK IN SHARE MODE to it. Using the READ COMMITTED
isolation level is good here, because each consistent read
within the same transaction reads from its own fresh snapshot.
Serialize your transactions with table level locks:
LOCK TABLES t1 WRITE, t2 READ, ... ;
[do something with tables t1 and t2 here]; UNLOCK TABLES.
Table level locks make your transactions queue nicely,
and deadlocks are avoided. Note that LOCK TABLES
implicitly starts a transaction, just like the command BEGIN,
and UNLOCK TABLES implicitly ends the transaction in a COMMIT.
1. If the Unix `top' or the Windows `Task Manager' shows that the CPU usage percentage with your workload is less than 70%, your workload is probably disk-bound. Maybe you are making too many transaction commits, or the buffer pool is too small. Making the buffer pool bigger can help, but do not set it bigger than 80% of physical memory.
2. Wrap several modifications into one transaction. InnoDB must flush the log to disk at each transaction commit, if that transaction made modifications to the database. Since the rotation speed of a disk is typically at most 167 revolutions/second, that constrains the number of commits to the same 167/second if the disk does not fool the operating system.
3.
If you can afford the loss of some latest committed transactions, you can
set the `my.cnf' parameter innodb_flush_log_at_trx_commit
to 0. InnoDB tries to flush the log once per second anyway,
though the flush is not guaranteed.
4. Make your log files big, even as big as the buffer pool. When InnoDB has written the log files full, it has to write the modified contents of the buffer pool to disk in a checkpoint. Small log files will cause many unnecessary disk writes. The drawback in big log files is that recovery time will be longer.
5. Also the log buffer should be quite big, say 8 MB.
6. (Relevant from 3.23.39 up.)
In some versions of Linux and Unix, flushing files to disk with the Unix
fdatasync and other similar methods is surprisingly slow.
The default method InnoDB uses is the fdatasync function.
If you are not satisfied with the database write performance, you may
try setting innodb_flush_method in `my.cnf'
to O_DSYNC, though O_DSYNC seems to be slower on most systems.
7. In importing data to InnoDB, make sure that MySQL does not have
autocommit=1 on; otherwise every insert requires a log flush to disk.
Put the line
SET AUTOCOMMIT=0;
before your plain SQL import file, and the line
COMMIT;
after it.
If you use the `mysqldump' option --opt, you will get dump
files which are fast to import also into an InnoDB table, even without wrapping
them in the above SET AUTOCOMMIT=0; ... COMMIT; wrappers.
8. Beware of big rollbacks of mass inserts: InnoDB uses the insert buffer to save disk I/O in inserts, but in a corresponding rollback no such mechanism is used. A disk-bound rollback can take 30 times the time of the corresponding insert. Killing the database process will not help because the rollback will start again at the database startup. The only way to get rid of a runaway rollback is to increase the buffer pool so that the rollback becomes CPU-bound and runs fast, or delete the whole InnoDB database.
9.
Beware also of other big disk-bound operations.
Use DROP TABLE or TRUNCATE (from MySQL-4.0 up) to empty a
table, not DELETE FROM yourtable.
10.
Use the multi-line INSERT to reduce
communication overhead between the client and the server if you need
to insert many rows:
INSERT INTO yourtable VALUES (1, 2), (5, 5);
This tip is of course valid for inserts into any table type, not just InnoDB.
Starting from version 3.23.41 InnoDB includes the InnoDB
Monitor which prints information on the InnoDB internal state.
When switched on, InnoDB Monitor
will make the MySQL server `mysqld' print data
(note: the MySQL client will not print anything)
to the standard
output about once every 15 seconds. This data is useful in
performance tuning.
On Windows you must start mysqld-max
from an MS-DOS prompt
with the --standalone --console
options to direct the output to the MS-DOS prompt
window.
There is a separate innodb_lock_monitor which
prints the same information as innodb_monitor
plus information on locks set by each transaction.
The printed information includes data on:
You can start InnoDB Monitor through the following SQL command:
CREATE TABLE innodb_monitor(a INT) type = innodb;
and stop it by
DROP TABLE innodb_monitor;
The CREATE TABLE syntax is just a way to pass a command
to the InnoDB engine through the MySQL SQL parser: the created
table is not relevant at all for InnoDB Monitor. If you shut down
the database when the monitor is running, and you want to start
the monitor again, you have to drop the
table before you can issue a new CREATE TABLE
to start the monitor.
This syntax may change in a future release.
A sample output of the InnoDB Monitor:
================================
010809 18:45:06 INNODB MONITOR OUTPUT
================================
--------------------------
LOCKS HELD BY TRANSACTIONS
--------------------------
LOCK INFO:
Number of locks in the record hash table 1294
LOCKS FOR TRANSACTION ID 0 579342744
TABLE LOCK table test/mytable trx id 0 582333343 lock_mode IX
RECORD LOCKS space id 0 page no 12758 n bits 104 table test/mytable index
PRIMARY trx id 0 582333343 lock_mode X
Record lock, heap no 2 PHYSICAL RECORD: n_fields 74; 1-byte offs FALSE;
info bits 0
 0: len 4; hex 0001a801; asc ;; 1: len 6; hex 000022b5b39f; asc ";;
 2: len 7; hex 000002001e03ec; asc ;; 3: len 4; hex 00000001; ...
-----------------------------------------------
CURRENT SEMAPHORES RESERVED AND SEMAPHORE WAITS
-----------------------------------------------
SYNC INFO:
Sorry, cannot give mutex list info in non-debug version!
Sorry, cannot give rw-lock list info in non-debug version!
-----------------------------------------------------
SYNC ARRAY INFO: reservation count 6041054, signal count 2913432
4a239430 waited for by thread 49627477 op. S-LOCK file NOT KNOWN line 0
Mutex 0 sp 5530989 r 62038708 sys 2155035;
rws 0 8257574 8025336; rwx 0 1121090 1848344
-----------------------------------------------------
CURRENT PENDING FILE I/O'S
--------------------------
Pending normal aio reads:
Reserved slot, messages 40157658 4a4a40b8
Reserved slot, messages 40157658 4a477e28
...
Reserved slot, messages 40157658 4a4424a8
Reserved slot, messages 40157658 4a39ea38
Total of 36 reserved aio slots
Pending aio writes:
Total of 0 reserved aio slots
Pending insert buffer aio reads:
Total of 0 reserved aio slots
Pending log writes or reads:
Reserved slot, messages 40158c98 40157f98
Total of 1 reserved aio slots
Pending synchronous reads or writes:
Total of 0 reserved aio slots
-----------
BUFFER POOL
-----------
LRU list length 8034
Free list length 0
Flush list length 999
Buffer pool size in pages 8192
Pending reads 39
Pending writes: LRU 0, flush list 0, single page 0
Pages read 31383918, created 51310, written 2985115
----------------------------
END OF INNODB MONITOR OUTPUT
============================
010809 18:45:22 InnoDB starts purge
010809 18:45:22 InnoDB purged 0 pages
Some notes on the output:
The detailed mutex and rw-lock list information in the SYNC INFO
section is available only if InnoDB has been compiled with UNIV_SYNC_DEBUG
defined in `univ.i'.
Since InnoDB is a multi-versioned database, it must keep information of old versions of rows in the tablespace. This information is stored in a data structure we call a rollback segment after an analogous data structure in Oracle.
InnoDB internally adds two fields to each row stored in the database. A 6-byte field tells the transaction identifier for the last transaction which inserted or updated the row. Also a deletion is internally treated as an update where a special bit in the row is set to mark it as deleted. Each row also contains a 7-byte field called the roll pointer. The roll pointer points to an undo log record written to the rollback segment. If the row was updated, then the undo log record contains the information necessary to rebuild the content of the row before it was updated.
InnoDB uses the information in the rollback segment to perform the undo operations needed in a transaction rollback. It also uses the information to build earlier versions of a row for a consistent read.
Undo logs in the rollback segment are divided into insert and update undo logs. Insert undo logs are only needed in transaction rollback and can be discarded as soon as the transaction commits. Update undo logs are used also in consistent reads, and they can be discarded only after there is no transaction present for which InnoDB has assigned a snapshot that in a consistent read could need the information in the update undo log to build an earlier version of a database row.
You must remember to commit your transactions regularly, including those transactions which only issue consistent reads. Otherwise InnoDB cannot discard data from the update undo logs, and the rollback segment may grow too big, filling up your tablespace.
The physical size of an undo log record in the rollback segment is typically smaller than the corresponding inserted or updated row. You can use this information to calculate the space needed for your rollback segment.
In our multi-versioning scheme a row is not physically removed from the database immediately when you delete it with an SQL statement. Only when InnoDB can discard the update undo log record written for the deletion can it also physically remove the corresponding row and its index records from the database. This removal operation is called a purge, and it is quite fast, usually taking the same order of time as the SQL statement which did the deletion.
MySQL stores its data dictionary information of tables
in `.frm'
files in database directories. But every InnoDB type table
also has its own entry in InnoDB internal data dictionaries
inside the tablespace. When MySQL drops a table or a database,
it has to delete both the `.frm' file or files and
the corresponding entries inside the InnoDB data dictionary.
This is the reason why you cannot move InnoDB tables between
databases simply by moving the `.frm' files, and why
DROP DATABASE did not work for InnoDB type tables
in MySQL versions <= 3.23.43.
Every InnoDB table has a special index called the clustered index
where the data of the rows is stored. If you define a
PRIMARY KEY on your table, then the index of the primary key
will be the clustered index.
If you do not define a primary key for your table, InnoDB will internally generate a clustered index where the rows are ordered by the row id InnoDB assigns to the rows in such a table. The row id is a 6-byte field which monotonically increases as new rows are inserted. Thus the rows ordered by the row id will be physically in the insertion order.
Accessing a row through the clustered index is fast, because the row data will be on the same page where the index search leads us. In many databases the data is traditionally stored on a different page from the index record. If a table is large, the clustered index architecture often saves a disk I/O when compared to the traditional solution.
The records in non-clustered indexes (we also call them secondary indexes), in InnoDB contain the primary key value for the row. InnoDB uses this primary key value to search for the row from the clustered index. Note that if the primary key is long, the secondary indexes will use more space.
All indexes in InnoDB are B-trees where the index records are stored in the leaf pages of the tree. The default size of an index page is 16 KB. When new records are inserted, InnoDB tries to leave 1 / 16 of the page free for future insertions and updates of the index records.
If index records are inserted in a sequential (ascending or descending) order, the resulting index pages will be about 15/16 full. If records are inserted in a random order, then the pages will be 1/2 - 15/16 full. If the fillfactor of an index page drops below 1/2, InnoDB will try to contract the index tree to free the page.
It is a common situation in a database application that the primary key is a unique identifier and new rows are inserted in the ascending order of the primary key. Thus the insertions to the clustered index do not require random reads from a disk.
On the other hand, secondary indexes are usually non-unique and insertions happen in a relatively random order into secondary indexes. This would cause a lot of random disk I/Os without a special mechanism used in InnoDB.
If an index record should be inserted to a non-unique secondary index, InnoDB checks if the secondary index page is already in the buffer pool. If that is the case, InnoDB will do the insertion directly to the index page. But if the index page is not found in the buffer pool, InnoDB inserts the record to a special insert buffer structure. The insert buffer is kept so small that it entirely fits in the buffer pool, and insertions can be made to it very fast.
The insert buffer is periodically merged to the secondary index trees in the database. Often we can merge several insertions on the same page of the index tree, and hence save disk I/Os. It has been measured that the insert buffer can speed up insertions to a table up to 15 times.
If a database fits almost entirely in main memory, then the fastest way to perform queries on it is to use hash indexes. InnoDB has an automatic mechanism which monitors index searches made to the indexes defined for a table, and if InnoDB notices that queries could benefit from building of a hash index, such an index is automatically built.
But note that the hash index is always built based on an existing B-tree index on the table. InnoDB can build a hash index on a prefix of any length of the key defined for the B-tree, depending on what search pattern InnoDB observes on the B-tree index. A hash index can be partial: it is not required that the whole B-tree index is cached in the buffer pool. InnoDB will build hash indexes on demand to those pages of the index which are often accessed.
In a sense, through the adaptive hash index mechanism InnoDB adapts itself to ample main memory, coming closer to the architecture of main memory databases.
After a database startup, when a user first does an insert to a
table T
where an auto-increment column has been defined, and the user does not provide
an explicit value for the column, then InnoDB executes SELECT
MAX(auto-inc-column) FROM T, and assigns that value incremented
by one to the column and the auto-increment counter of the table.
We say that
the auto-increment counter for table T has been initialised.
InnoDB follows the same procedure in initializing the auto-increment counter for a freshly created table.
Note that if the user specifies in an insert the value 0 for the auto-increment column, then InnoDB treats the row as if the value had not been specified.
After the auto-increment counter has been initialised, if a user inserts a row where he explicitly specifies the column value, and the value is bigger than the current counter value, then the counter is set to the specified column value. If the user does not explicitly specify a value, then InnoDB increments the counter by one and assigns its new value to the column.
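For example, a minimal sketch of the behaviour described above (the table and values are only illustrative):
CREATE TABLE T (id INT AUTO_INCREMENT, name CHAR(10), PRIMARY KEY (id)) TYPE = InnoDB;
INSERT INTO T (name) VALUES ('a');
INSERT INTO T (id, name) VALUES (100, 'b');
INSERT INTO T (name) VALUES ('c');
Assuming a freshly created table, the rows get the id values 1, 100, and 101: the explicit value 100 raises the counter, and the next automatically assigned value is 101.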
The auto-increment mechanism, when assigning values from the counter, bypasses locking and transaction handling. Therefore you may also get gaps in the number sequence if you roll back transactions which have got numbers from the counter.
The behaviour of auto-increment is not defined if a user gives a negative value to the column or if the value becomes bigger than the maximum integer that can be stored in the specified integer type.
In disk I/O InnoDB uses asynchronous I/O. On Windows NT it uses the native asynchronous I/O provided by the operating system. On Unix, InnoDB uses simulated asynchronous I/O built into InnoDB: InnoDB creates a number of I/O threads to take care of I/O operations, such as read-ahead. In a future version we will add support for simulated aio on Windows NT and native aio on those versions of Unix which have one.
On Windows NT InnoDB uses non-buffered I/O. That means that the disk pages InnoDB reads or writes are not buffered in the operating system file cache. This saves some memory bandwidth.
Starting from 3.23.41 InnoDB uses a novel file flush technique called doublewrite. It adds safety to crash recovery after an operating system crash or a power outage, and improves performance on most Unix flavors by reducing the need for fsync operations.
Doublewrite means that InnoDB before writing pages to a datafile first writes them to a contiguous tablespace area called the doublewrite buffer. Only after the write and the flush to the doublewrite buffer has completed, InnoDB writes the pages to their proper positions in the datafile. If the operating system crashes in the middle of a page write, InnoDB will in recovery find a good copy of the page from the doublewrite buffer.
Starting from 3.23.41
you can also use a raw disk partition as a datafile, though this has
not been tested yet. When you create a new datafile you have
to put the keyword newraw immediately after the data
file size in innodb_data_file_path. The partition must be
at least as large as the size you specify. Note that 1M in InnoDB is
1024 x 1024 bytes, while in disk specifications 1 MB usually means
1 000 000 bytes.
innodb_data_file_path=hdd1:5Gnewraw;hdd2:2Gnewraw
When you start the database again you must change the keyword
to raw. Otherwise, InnoDB will write over your
partition!
innodb_data_file_path=hdd1:5Graw;hdd2:2Graw
By using a raw disk you can on some Unixes perform unbuffered I/O.
There are two read-ahead heuristics in InnoDB: sequential read-ahead and random read-ahead. In sequential read-ahead InnoDB notices that the access pattern to a segment in the tablespace is sequential. Then InnoDB will post in advance a batch of reads of database pages to the I/O system. In random read-ahead InnoDB notices that some area in a tablespace seems to be in the process of being fully read into the buffer pool. Then InnoDB posts the remaining reads to the I/O system.
The datafiles you define in the configuration file form the tablespace of InnoDB. The files are simply concatenated to form the tablespace; there is no striping in use. Currently you cannot directly instruct where the space is allocated for your tables, except by using the following fact: from a newly created tablespace InnoDB will allocate space starting from the low end.
The tablespace consists of database pages whose default size is 16 KB. The pages are grouped into extents of 64 consecutive pages. The 'files' inside a tablespace are called segments in InnoDB. The name of the rollback segment is somewhat misleading because it actually contains many segments in the tablespace.
For each index in InnoDB we allocate two segments: one is for non-leaf nodes of the B-tree, the other is for the leaf nodes. The idea here is to achieve better sequentiality for the leaf nodes, which contain the data.
When a segment grows inside the tablespace, InnoDB allocates the first 32 pages to it individually. After that InnoDB starts to allocate whole extents to the segment. InnoDB can add to a large segment up to 4 extents at a time to ensure good sequentiality of data.
Some pages in the tablespace contain bitmaps of other pages, and therefore a few extents in an InnoDB tablespace cannot be allocated to segments as a whole, but only as individual pages.
When you issue a query SHOW TABLE STATUS FROM ... LIKE ...
to ask for available free space in the tablespace, InnoDB will
report the extents which are definitely free in the tablespace.
InnoDB always reserves some extents for clean-up and other internal
purposes; these reserved extents are not included in the free space.
When you delete data from a table, InnoDB will contract the corresponding B-tree indexes. It depends on the pattern of deletes if that frees individual pages or extents to the tablespace, so that the freed space is available for other users. Dropping a table or deleting all rows from it is guaranteed to release the space to other users, but remember that deleted rows can be physically removed only in a purge operation after they are no longer needed in transaction rollback or consistent read.
If there are random insertions or deletions in the indexes of a table, the indexes may become fragmented. By fragmentation we mean that the physical ordering of the index pages on the disk is not close to the alphabetical ordering of the records on the pages, or that there are many unused pages in the 64-page blocks which were allocated to the index.
It can speed up index scans if you
periodically use mysqldump to dump the table to
a text file, drop the table, and reload it from the dump.
Another way to do the defragmenting is to ALTER the table type to
MyISAM and back to InnoDB again.
Note that a MyISAM table must fit in a single file
on your operating system.
If the insertions to an index are always ascending and records are deleted only from the end, then the file space management algorithm of InnoDB guarantees that fragmentation in the index will not occur.
The error handling in InnoDB is not always the same as specified in the SQL standard. According to SQL-99, any error during an SQL statement should cause the rollback of that statement. InnoDB sometimes rolls back only part of the statement, or the whole transaction. The following list specifies the error handling of InnoDB.
If you run out of file space in the tablespace, you will get the MySQL
'Table is full' error and InnoDB rolls back the SQL statement.
A duplicate key error rolls back only the insert of that particular row,
even in a statement like INSERT INTO ... SELECT ....
This will probably change so that the SQL statement will be rolled
back if you have not specified the IGNORE option in your
statement.
Warning: do not convert MySQL system tables in the mysql database from
MyISAM to InnoDB tables. If you have done so, restore the old system
tables from a backup or regenerate them with the
mysql_install_db script.
SHOW TABLE STATUS does not give accurate statistics
on InnoDB tables, except for the physical size reserved by the table.
The row count is only a rough estimate used in SQL optimisation.
You cannot create an index on a prefix of a column. For example, the following gives an error:
CREATE TABLE T (A CHAR(20), B INT, UNIQUE (A(5))) TYPE = InnoDB;
If you create a non-unique index on a prefix of a column, InnoDB will create an index over the whole column.
INSERT DELAYED is not supported for InnoDB tables.
The LOCK TABLES operation does not know of InnoDB
row level locks set in already completed SQL statements: this means that
you can get a table lock on a table even if there still exist transactions
of other users which have row level locks on the same table. Thus
your operations on the table may have to wait if they collide with
these locks of other users. Also a deadlock is possible. However,
this does not endanger transaction integrity, because the row level
locks set by InnoDB will always take care of the integrity.
Also, a table lock prevents other transactions from acquiring more
row level locks (in a conflicting lock mode) on the table.
You cannot have a key on a BLOB or TEXT column.
DELETE FROM TABLE does not regenerate the table but instead
deletes all rows, one by one, which is not that fast. In future versions
of MySQL you can use TRUNCATE which is fast.
AUTO_INCREMENT column.
You cannot set the initial AUTO_INCREMENT column value in
InnoDB with CREATE TABLE ... AUTO_INCREMENT=...
(or ALTER TABLE ...). To set the value,
insert a dummy row with a value one less, and delete that dummy row
(see the sketch below).
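For example, to make the next automatically assigned value 1000 in a table yourtable with an AUTO_INCREMENT column id (the names and values are only illustrative):
INSERT INTO yourtable (id) VALUES (999);
DELETE FROM yourtable WHERE id = 999;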
HANDLER SQL commands now work also for InnoDB
type tables. InnoDB always does the HANDLER reads as
consistent reads. HANDLER is a direct access path to read
individual indexes of tables. In some cases HANDLER can be
used as a substitute for server-side cursors.
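A sketch of typical HANDLER usage (the table name and index name are only illustrative):
HANDLER yourtable OPEN;
HANDLER yourtable READ par_ind FIRST;
HANDLER yourtable READ par_ind NEXT;
HANDLER yourtable CLOSE;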
Contact information of Innobase Oy, producer of the InnoDB engine. Web site: http://www.innodb.com/. E-mail: Heikki.Tuuri@innodb.com
Phone: 358-9-6969 3250 (office), 358-40-5617367 (mobile)
Innobase Oy Inc.
World Trade Center Helsinki
Aleksanterinkatu 17
P.O.Box 800
00101 Helsinki
Finland
BDB or BerkeleyDB Tables
BerkeleyDB, available at http://www.sleepycat.com/, has provided
MySQL with a transactional storage engine. Support for this storage engine is
included in the MySQL source distribution starting from version 3.23.34 and is
activated in the MySQL-Max binary. This storage engine is typically called
BDB for short.
BDB tables may have a greater chance of surviving crashes and are also
capable of COMMIT and ROLLBACK operations on transactions.
The MySQL source distribution comes with a BDB distribution that has a
couple of small patches to make it work more smoothly with MySQL.
You can't use a non-patched BDB version with MySQL.
We at MySQL AB are working in close cooperation with Sleepycat to keep the quality of the MySQL/BDB interface high.
When it comes to supporting BDB tables, we are committed to helping our
users locate the problem and to helping create a reproducible test case
for any problems involving BDB tables. Any such test case will be
forwarded to Sleepycat who in turn will help us find and fix the
problem. As this is a two-stage operation, any problems with BDB tables
may take a little longer for us to fix than for other storage engines.
However, as the BerkeleyDB code itself has been used by many other
applications than MySQL, we don't envision any big problems with
this. See section 1.4.1 Support Offered by MySQL AB.
Installing BDB
If you have downloaded a binary version of MySQL that includes
support for BerkeleyDB, simply follow the instructions for installing a
binary version of MySQL.
See section 2.2.11 Installing a MySQL Binary Distribution. See section 4.7.5 mysqld-max, An Extended mysqld Server.
To compile MySQL with Berkeley DB support, download MySQL
Version 3.23.34 or newer and configure MySQL with the
--with-berkeley-db option. See section 2.3 Installing a MySQL Source Distribution.
cd /path/to/source/of/mysql-3.23.34
./configure --with-berkeley-db
Please refer to the manual provided with the BDB distribution for
more updated information.
Even though Berkeley DB is in itself well tested and reliable, the MySQL interface is still considered gamma quality. We are actively improving and optimising it to get it stable very soon.
BDB startup options
If you are running with AUTOCOMMIT=0 then your changes in BDB
tables will not be updated until you execute COMMIT. Instead of commit
you can execute ROLLBACK to forget your changes. See section 6.7.1 BEGIN/COMMIT/ROLLBACK Syntax.
If you are running with AUTOCOMMIT=1 (the default), your changes
will be committed immediately. You can start an extended transaction with
the BEGIN WORK SQL command, after which your changes will not be
committed until you execute COMMIT (or decide to ROLLBACK
the changes).
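For example (the table name t and the value are only illustrative):
BEGIN WORK;
INSERT INTO t VALUES (1);
ROLLBACK;
After the ROLLBACK the inserted row is gone again.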
The following options to mysqld can be used to change the behaviour of
BDB tables:
| Option | Description |
| --bdb-home=directory | Base directory for BDB tables. This should be the same directory you use for --datadir. |
| --bdb-lock-detect=# | Berkeley lock detect. One of (DEFAULT, OLDEST, RANDOM, or YOUNGEST). |
| --bdb-logdir=directory | Berkeley DB log file directory. |
| --bdb-no-sync | Don't synchronously flush logs. |
| --bdb-no-recover | Don't start Berkeley DB in recover mode. |
| --bdb-shared-data | Start Berkeley DB in multi-process mode (don't use DB_PRIVATE when initialising Berkeley DB). |
| --bdb-tmpdir=directory | Berkeley DB temporary file directory. |
| --skip-bdb | Disable usage of BDB tables. |
| -O bdb_max_lock=1000 | Set the maximum number of locks possible. See section 4.5.7.4 SHOW VARIABLES. |
If you use --skip-bdb, MySQL will not initialise the
Berkeley DB library and this will save a lot of memory. Of course,
you cannot use BDB tables if you are using this option. If you try
to create a BDB table, MySQL will instead create a MyISAM table.
Normally you should start mysqld without --bdb-no-recover if you
intend to use BDB tables. This may, however, give you problems when you
try to start mysqld if the BDB log files are corrupted. See section 2.4.2 Problems Starting the MySQL Server.
With bdb_max_lock you can specify the maximum number of locks
(10000 by default) you can have active on a BDB table. You should
increase this if you get errors of the type bdb: Lock table is out of
available locks or Got error 12 from ... when you perform long
transactions or when mysqld has to examine a lot of rows to
calculate the query.
You may also want to change binlog_cache_size and
max_binlog_cache_size if you are using big multi-line transactions.
See section 6.7.1 BEGIN/COMMIT/ROLLBACK Syntax.
The BDB storage engine maintains
log files. For maximum performance you should place these on another disk
than your databases by using the --bdb-logdir option.
MySQL performs a checkpoint each time a new BDB log
file is started, and removes any log files that are not needed for
current transactions. One can also run FLUSH LOGS at any time
to checkpoint the Berkeley DB tables.
For disaster recovery, one should use table backups plus
MySQL's binary log. See section 4.4.1 Database Backups.
Warning: If you delete old log files that are in use, BDB will
not be able to do recovery at all and you may lose data if something
goes wrong.
MySQL requires a PRIMARY KEY in each BDB table to be
able to refer to previously read rows. If you don't create one,
MySQL will create and maintain a hidden PRIMARY KEY for
you. The hidden key has a length of 5 bytes and is incremented for each
insert attempt.
If all columns you access in a BDB table are part of the same index or
part of the primary key, then MySQL can execute the query
without having to access the actual row. In a MyISAM table the
above holds only if the columns are part of the same index.
The PRIMARY KEY will be faster than any other key, as the
PRIMARY KEY is stored together with the row data. Because the other keys are
stored as the key data plus the PRIMARY KEY, it's important to keep the
PRIMARY KEY as short as possible to save disk space and get better speed.
LOCK TABLES works on BDB tables as with other tables. If
you don't use LOCK TABLES, MySQL will issue an internal
multiple-write lock on the table to ensure that the table will be
properly locked if another thread issues a table lock.
Locking in BDB tables is done at the page level.
SELECT COUNT(*) FROM table_name is slow because BDB tables don't
maintain a count of the number of rows in the table.
Sequential scanning is slower than with MyISAM tables, as the data in
BDB tables is stored in B-trees and not in a separate data file.
The application must always be prepared for the case that any change of a
BDB table may cause an automatic rollback and that any
read may fail with a deadlock error.
Keys are not compressed to previous keys as they are in ISAM and MyISAM
tables. In other words, the key information will take a little more
space in BDB tables compared to MyISAM tables.
There are often holes in a BDB table to allow you to insert new rows in
the middle of the key tree. This makes BDB tables somewhat larger than
MyISAM tables.
The optimiser needs to know the approximate number of rows in a BDB table. If you don't
issue a lot of DELETE or ROLLBACK statements, this number
should be accurate enough for the MySQL optimiser, but as MySQL
only stores the number on close, it may be incorrect if MySQL dies
unexpectedly. It should not be fatal even if this number is not 100%
correct. One can update the number of rows by executing ANALYZE
TABLE or OPTIMIZE TABLE. See section 4.5.2 ANALYZE TABLE Syntax. See section 4.5.1 OPTIMIZE TABLE Syntax.
If you run out of disk space while using a BDB table, you will get an error
(probably error 28) and the transaction should roll back. This is in
contrast to MyISAM and ISAM tables, where mysqld will
wait for enough free disk space before continuing.
Things we need to fix for BDB in the near future:
Opening many BDB tables at the same time is currently slow. If you are
going to use BDB tables, you should not have a very big table cache
(for example, >256) and you should use --no-auto-rehash with the mysql
client. We plan to partly fix this in 4.0.
SHOW TABLE STATUS doesn't yet provide that much information for BDB tables.
Operating Systems Supported by BDB
Currently we know that the BDB storage engine works with the following
operating systems:
It doesn't work with the following operating systems:
Note: The above list is not complete; we will update it as we receive more information.
If you build MySQL with support for BDB tables and get
the following error in the log file when you start mysqld:
bdb: architecture lacks fast mutexes: applications cannot be threaded
Can't init databases
it means that BDB tables are not supported for your architecture.
In this case you must rebuild MySQL without BDB table support.
Restrictions on BDB Tables
The following restrictions apply when you use BDB tables:
BDB tables store in the `.db' file the path to the file as it was
created. (This was done to be able to detect locks in a multi-user
environment that supports symlinks.)
The effect of this is that BDB tables are not movable between directories!
When taking a backup of BDB tables, you have to either use
mysqldump or take a backup of all table_name.db files and
the BDB log files. The BDB log files are the files in the base
data directory named log.XXXXXXXXXX (ten digits).
The BDB storage engine stores unfinished transactions in the log files
and requires these logs to be present when mysqld starts.
Errors That May Occur When Using BDB Tables
If you get the following error in the hostname.err log when
starting mysqld:
bdb: Ignoring log file: .../log.XXXXXXXXXX: unsupported log version #
it means that the new BDB version doesn't support the old log
file format. In this case you have to delete all BDB logs
from your database directory (the files with names that have the format
log.XXXXXXXXXX) and restart mysqld. We would also
recommend that you do a mysqldump --opt of your old BDB
tables, delete the old tables, and restore the dump.
You may also see messages like the following in the error log:
001119 23:43:56 bdb: Missing log fileid entry
001119 23:43:56 bdb: txn_abort: Log undo failed for LSN: 1 3644744: Invalid
This is not fatal, but we don't recommend that you delete tables if you are
not in auto-commit mode, until this problem is fixed (the fix is
not trivial).
This chapter describes the APIs available for MySQL, where to get them, and how to use them. The C API is the most extensively covered, as it was developed by the MySQL team, and is the basis for most of the other APIs.
The C API code is distributed with MySQL. It is included in the
mysqlclient library and allows C programs to access a database.
Many of the clients in the MySQL source distribution are
written in C. If you are looking for examples that demonstrate how to
use the C API, take a look at these clients. You can find these in the
clients directory in the MySQL source distribution.
Most of the other client APIs (all except Connector/J) use the mysqlclient
library to communicate with the MySQL server. This means that, for
example, you can take advantage of many of the same environment variables
that are used by other client programs, because they are referenced from the
library. See section 4.8 MySQL Client-Side Scripts and Utilities, for a list of these variables.
The client has a maximum communication buffer size. The size of the buffer that is allocated initially (16K bytes) is automatically increased up to the maximum size (the maximum is 16M). Because buffer sizes are increased only as demand warrants, simply increasing the default maximum limit does not in itself cause more resources to be used. This size check is mostly a check for erroneous queries and communication packets.
The communication buffer must be large enough to contain a single SQL
statement (for client-to-server traffic) and one row of returned data (for
server-to-client traffic). Each thread's communication buffer is dynamically
enlarged to handle any query or row up to the maximum limit. For example, if
you have BLOB values that contain up to 16M of data, you must have a
communication buffer limit of at least 16M (in both server and client). The
client's default maximum is 16M, but the default maximum in the server is
1M. You can increase this by changing the value of the
max_allowed_packet parameter when the server is started. See section 5.5.2 Tuning Server Parameters.
The MySQL server shrinks each communication buffer to
net_buffer_length bytes after each query. For clients, the size of
the buffer associated with a connection is not decreased until the connection
is closed, at which time client memory is reclaimed.
For programming with threads, see section 8.1.14 How to Make a Threaded Client. For creating a stand-alone application which includes the "server" and "client" in the same program (and does not communicate with an external MySQL server), see section 8.1.15 libmysqld, the Embedded MySQL Server Library.
MYSQL
This structure represents a handle to one database connection. It is used for almost all MySQL functions.
MYSQL_RES
This structure represents the result of a query that returns rows
(SELECT, SHOW, DESCRIBE, EXPLAIN). The
information returned from a query is called the result set in the
remainder of this section.
MYSQL_ROW
This is a type-safe representation of one row of data. Rows are obtained by
calling mysql_fetch_row().
MYSQL_FIELD
This structure contains information about a field, such as the field's name,
type, and size. You may obtain the MYSQL_FIELD structures for each field by
calling mysql_fetch_field() repeatedly. Field values are not part of
this structure; they are contained in a MYSQL_ROW structure.
MYSQL_FIELD_OFFSET
This is a type-safe representation of an offset into a MySQL field list.
(Used by mysql_field_seek().) Offsets are field numbers
within a row, beginning at zero.
my_ulonglong
The type used for the number of rows and for mysql_affected_rows(),
mysql_num_rows(), and mysql_insert_id(). This type provides a
range of 0 to 1.84e19.
On some systems, attempting to print a value of type my_ulonglong
will not work. To print such a value, convert it to unsigned long
and use a %lu print format. Example:
printf ("Number of rows: %lu\n", (unsigned long) mysql_num_rows(result));
The MYSQL_FIELD structure contains the members listed here:
char * name
The name of the field, as a null-terminated string.
char * table
The name of the table containing this field, if it isn't a calculated field.
For calculated fields, the table value is an empty string.
char * def
The default value of this field, as a null-terminated string. This is set
only if you use mysql_list_fields().
enum enum_field_types type
The type of the field. The type value may be one of the following:
| Type value | Type description |
| FIELD_TYPE_TINY | TINYINT field |
| FIELD_TYPE_SHORT | SMALLINT field |
| FIELD_TYPE_LONG | INTEGER field |
| FIELD_TYPE_INT24 | MEDIUMINT field |
| FIELD_TYPE_LONGLONG | BIGINT field |
| FIELD_TYPE_DECIMAL | DECIMAL or NUMERIC field |
| FIELD_TYPE_FLOAT | FLOAT field |
| FIELD_TYPE_DOUBLE | DOUBLE or REAL field |
| FIELD_TYPE_TIMESTAMP | TIMESTAMP field |
| FIELD_TYPE_DATE | DATE field |
| FIELD_TYPE_TIME | TIME field |
| FIELD_TYPE_DATETIME | DATETIME field |
| FIELD_TYPE_YEAR | YEAR field |
| FIELD_TYPE_STRING | String (CHAR or VARCHAR) field |
| FIELD_TYPE_BLOB | BLOB or TEXT field (use max_length to determine the maximum length) |
| FIELD_TYPE_SET | SET field |
| FIELD_TYPE_ENUM | ENUM field |
| FIELD_TYPE_NULL | NULL-type field |
| FIELD_TYPE_CHAR | Deprecated; use FIELD_TYPE_TINY instead |
You can use the IS_NUM() macro to test whether a field has a
numeric type. Pass the type value to IS_NUM() and it
will evaluate to TRUE if the field is numeric:
if (IS_NUM(field->type))
printf("Field is numeric\n");
unsigned int length
The width of the field, as specified in the table definition.
unsigned int max_length
The maximum width of the field for the result set. If you use
mysql_store_result() or mysql_list_fields(), this contains the
maximum length for the field. If you use mysql_use_result(), the
value of this variable is zero.
unsigned int flags
Different bit-flags for the field. The flags value may have zero
or more of the following bits set:
| Flag value | Flag description |
| NOT_NULL_FLAG | Field can't be NULL |
| PRI_KEY_FLAG | Field is part of a primary key |
| UNIQUE_KEY_FLAG | Field is part of a unique key |
| MULTIPLE_KEY_FLAG | Field is part of a non-unique key |
| UNSIGNED_FLAG | Field has the UNSIGNED attribute |
| ZEROFILL_FLAG | Field has the ZEROFILL attribute |
| BINARY_FLAG | Field has the BINARY attribute |
| AUTO_INCREMENT_FLAG | Field has the AUTO_INCREMENT attribute |
| ENUM_FLAG | Field is an ENUM (deprecated) |
| SET_FLAG | Field is a SET (deprecated) |
| BLOB_FLAG | Field is a BLOB or TEXT (deprecated) |
| TIMESTAMP_FLAG | Field is a TIMESTAMP (deprecated) |
Use of the BLOB_FLAG, ENUM_FLAG, SET_FLAG, and
TIMESTAMP_FLAG flags is deprecated because they indicate the type of
a field rather than an attribute of its type. It is preferable to test
field->type against FIELD_TYPE_BLOB, FIELD_TYPE_ENUM,
FIELD_TYPE_SET, or FIELD_TYPE_TIMESTAMP instead.
The following example illustrates a typical use of the flags value:
if (field->flags & NOT_NULL_FLAG)
printf("Field can't be null\n");
You may use the following convenience macros to determine the boolean
status of the flags value:
| Macro | Description |
| IS_NOT_NULL(flags) | True if this field is defined as NOT NULL |
| IS_PRI_KEY(flags) | True if this field is a primary key |
| IS_BLOB(flags) | True if this field is a BLOB or TEXT (deprecated; test field->type instead) |
unsigned int decimals
The number of decimals for numeric fields.
The functions available in the C API are listed here and are described in greater detail in a later section. See section 8.1.3 C API Function Descriptions.
| Function | Description |
| mysql_affected_rows() | Returns the number of rows changed/deleted/inserted by the last UPDATE, DELETE, or INSERT query. |
| mysql_change_user() | Changes user and database on an open connection. |
| mysql_character_set_name() | Returns the name of the default character set for the connection. |
| mysql_close() | Closes a server connection. |
| mysql_connect() | Connects to a MySQL server. This function is deprecated; use mysql_real_connect() instead. |
| mysql_create_db() | Creates a database. This function is deprecated; use the SQL command CREATE DATABASE instead. |
| mysql_data_seek() | Seeks to an arbitrary row in a query result set. |
| mysql_debug() | Does a DBUG_PUSH with the given string. |
| mysql_drop_db() | Drops a database. This function is deprecated; use the SQL command DROP DATABASE instead. |
| mysql_dump_debug_info() | Makes the server write debug information to the log. |
| mysql_eof() | Determines whether the last row of a result set has been read. This function is deprecated; mysql_errno() or mysql_error() may be used instead. |
| mysql_errno() | Returns the error number for the most recently invoked MySQL function. |
| mysql_error() | Returns the error message for the most recently invoked MySQL function. |
| mysql_escape_string() | Escapes special characters in a string for use in a SQL statement. |
| mysql_fetch_field() | Returns the type of the next table field. |
| mysql_fetch_field_direct() | Returns the type of a table field, given a field number. |
| mysql_fetch_fields() | Returns an array of all field structures. |
| mysql_fetch_lengths() | Returns the lengths of all columns in the current row. |
| mysql_fetch_row() | Fetches the next row from the result set. |
| mysql_field_seek() | Puts the column cursor on a specified column. |
| mysql_field_count() | Returns the number of result columns for the most recent query. |
| mysql_field_tell() | Returns the position of the field cursor used for the last mysql_fetch_field(). |
| mysql_free_result() | Frees memory used by a result set. |
| mysql_get_client_info() | Returns client version information. |
| mysql_get_host_info() | Returns a string describing the connection. |
| mysql_get_server_version() | Returns the version number of the server as an integer (new in 4.1). |
| mysql_get_proto_info() | Returns the protocol version used by the connection. |
| mysql_get_server_info() | Returns the server version number. |
| mysql_info() | Returns information about the most recently executed query. |
| mysql_init() | Gets or initialises a MYSQL structure. |
| mysql_insert_id() | Returns the ID generated for an AUTO_INCREMENT column by the previous query. |
| mysql_kill() | Kills a given thread. |
| mysql_list_dbs() | Returns database names matching a simple regular expression. |
| mysql_list_fields() | Returns field names matching a simple regular expression. |
| mysql_list_processes() | Returns a list of the current server threads. |
| mysql_list_tables() | Returns table names matching a simple regular expression. |
| mysql_num_fields() | Returns the number of columns in a result set. |
| mysql_num_rows() | Returns the number of rows in a result set. |
| mysql_options() | Sets connect options for mysql_connect(). |
| mysql_ping() | Checks whether the connection to the server is working, reconnecting as necessary. |
| mysql_query() | Executes a SQL query specified as a null-terminated string. |
| mysql_real_connect() | Connects to a MySQL server. |
| mysql_real_escape_string() | Escapes special characters in a string for use in a SQL statement, taking into account the current charset of the connection. |
| mysql_real_query() | Executes a SQL query specified as a counted string. |
| mysql_reload() | Tells the server to reload the grant tables. |
| mysql_row_seek() | Seeks to a row in a result set, using the value returned from mysql_row_tell(). |
| mysql_row_tell() | Returns the row cursor position. |
| mysql_select_db() | Selects a database. |
| mysql_sqlstate() | Returns the SQLSTATE error code for the last error. |
| mysql_shutdown() | Shuts down the database server. |
| mysql_stat() | Returns the server status as a string. |
| mysql_store_result() | Retrieves a complete result set to the client. |
| mysql_thread_id() | Returns the current thread ID. |
| mysql_thread_safe() | Returns 1 if the clients are compiled as thread-safe. |
| mysql_use_result() | Initiates a row-by-row result set retrieval. |
| mysql_commit() | Commits the transaction (new in 4.1). |
| mysql_rollback() | Rolls back the transaction (new in 4.1). |
| mysql_autocommit() | Toggles autocommit mode on/off (new in 4.1). |
| mysql_more_results() | Checks whether any more results exist (new in 4.1). |
| mysql_next_result() | Returns/initiates the next result in multi-query executions (new in 4.1). |
To connect to the server, call mysql_init() to initialise a
connection handler, then call mysql_real_connect() with that
handler (along with other information such as the hostname, user name,
and password). Upon connection, mysql_real_connect() sets the
reconnect flag (part of the MYSQL structure) to a value of
1. This flag indicates, in the event that a query cannot be
performed because of a lost connection, to try reconnecting to the
server before giving up. When you are done with the connection, call
mysql_close() to terminate it.
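As a minimal sketch of this sequence (the host, user, password, and database names below are placeholders for your own values):
MYSQL mysql;

mysql_init(&mysql);
if (!mysql_real_connect(&mysql, "host", "user", "passwd", "database", 0, NULL, 0))
{
    fprintf(stderr, "Failed to connect: %s\n", mysql_error(&mysql));
}
else
{
    /* ... issue queries on the connection here ... */
    mysql_close(&mysql);
}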
While a connection is active, the client may send SQL queries to the server
using mysql_query() or mysql_real_query(). The difference
between the two is that mysql_query() expects the query to be
specified as a null-terminated string whereas mysql_real_query()
expects a counted string. If the string contains binary data (which may
include null bytes), you must use mysql_real_query().
For each non-SELECT query (for example, INSERT, UPDATE,
DELETE), you can find out how many rows were changed (affected)
by calling mysql_affected_rows().
For SELECT queries, you retrieve the selected rows as a result set.
(Note that some statements are SELECT-like in that they return rows.
These include SHOW, DESCRIBE, and EXPLAIN. They should
be treated the same way as SELECT statements.)
There are two ways for a client to process result sets. One way is to
retrieve the entire result set all at once by calling
mysql_store_result(). This function acquires from the server all the
rows returned by the query and stores them in the client. The second way is
for the client to initiate a row-by-row result set retrieval by calling
mysql_use_result(). This function initialises the retrieval, but does
not actually get any rows from the server.
In both cases, you access rows by calling mysql_fetch_row(). With
mysql_store_result(), mysql_fetch_row() accesses rows that have
already been fetched from the server. With mysql_use_result(),
mysql_fetch_row() actually retrieves the row from the server.
Information about the size of the data in each row is available by
calling mysql_fetch_lengths().
After you are done with a result set, call mysql_free_result()
to free the memory used for it.
The two retrieval mechanisms are complementary. Client programs should
choose the approach that is most appropriate for their requirements.
In practice, clients tend to use mysql_store_result() more
commonly.
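For illustration, a minimal mysql_store_result() retrieval loop might look as follows (this assumes an open connection mysql; the table and column names are hypothetical):
if (mysql_query(&mysql, "SELECT id, name FROM some_table") == 0)
{
    MYSQL_RES *result = mysql_store_result(&mysql);
    if (result)
    {
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(result)))
            printf("%s %s\n", row[0] ? row[0] : "NULL", row[1] ? row[1] : "NULL");
        mysql_free_result(result);   /* release the result set */
    }
}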
An advantage of mysql_store_result() is that because the rows have all
been fetched to the client, you not only can access rows sequentially, you
can move back and forth in the result set using mysql_data_seek() or
mysql_row_seek() to change the current row position within the result
set. You can also find out how many rows there are by calling
mysql_num_rows(). On the other hand, the memory requirements for
mysql_store_result() may be very high for large result sets and you
are more likely to encounter out-of-memory conditions.
An advantage of mysql_use_result() is that the client requires less
memory for the result set because it maintains only one row at a time (and
because there is less allocation overhead, mysql_use_result() can be
faster). Disadvantages are that you must process each row quickly to avoid
tying up the server, you don't have random access to rows within the result
set (you can only access rows sequentially), and you don't know how many rows
are in the result set until you have retrieved them all. Furthermore, you
must retrieve all the rows even if you determine in mid-retrieval that
you've found the information you were looking for.
The API makes it possible for clients to respond appropriately to
queries (retrieving rows only as necessary) without knowing whether or
not the query is a SELECT. You can do this by calling
mysql_store_result() after each mysql_query() (or
mysql_real_query()). If the result set call succeeds, the query
was a SELECT and you can read the rows. If the result set call
fails, call mysql_field_count() to determine whether a
result was actually to be expected. If mysql_field_count()
returns zero, the query returned no data (indicating that it was an
INSERT, UPDATE, DELETE, etc.), and was not
expected to return rows. If mysql_field_count() is non-zero, the
query should have returned rows, but didn't. This indicates that the
query was a SELECT that failed. See the description for
mysql_field_count() for an example of how this can be done.
Both mysql_store_result() and mysql_use_result() allow you to
obtain information about the fields that make up the result set (the number
of fields, their names and types, etc.). You can access field information
sequentially within the row by calling mysql_fetch_field() repeatedly,
or by field number within the row by calling
mysql_fetch_field_direct(). The current field cursor position may be
changed by calling mysql_field_seek(). Setting the field cursor
affects subsequent calls to mysql_fetch_field(). You can also get
information for fields all at once by calling mysql_fetch_fields().
For detecting and reporting errors, MySQL provides access to error
information by means of the mysql_errno() and mysql_error()
functions. These return the error code or error message for the most
recently invoked function that can succeed or fail, allowing you to determine
when an error occurred and what it was.
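For example, a failed query can be reported like this (assuming an open connection mysql; the table name is hypothetical):
if (mysql_query(&mysql, "SELECT * FROM no_such_table"))
{
    fprintf(stderr, "Error %u: %s\n", mysql_errno(&mysql), mysql_error(&mysql));
}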
In the descriptions here, a parameter or return value of NULL means
NULL in the sense of the C programming language, not a
MySQL NULL value.
Functions that return a value generally return a pointer or an integer.
Unless specified otherwise, functions returning a pointer return a
non-NULL value to indicate success or a NULL value to indicate
an error, and functions returning an integer return zero to indicate success
or non-zero to indicate an error. Note that ``non-zero'' means just that.
Unless the function description says otherwise, do not test against a value
other than zero:
if (result) /* correct */
... error ...
if (result < 0) /* incorrect */
... error ...
if (result == -1) /* incorrect */
... error ...
When a function returns an error, the Errors subsection of the
function description lists the possible types of errors. You can
find out which of these occurred by calling mysql_errno().
A string representation of the error may be obtained by calling
mysql_error().
mysql_affected_rows()
my_ulonglong mysql_affected_rows(MYSQL *mysql)
Returns the number of rows changed by the last UPDATE, deleted by
the last DELETE or inserted by the last INSERT
statement. May be called immediately after mysql_query() for
UPDATE, DELETE, or INSERT statements. For
SELECT statements, mysql_affected_rows() works like
mysql_num_rows().
An integer greater than zero indicates the number of rows affected or
retrieved. Zero indicates that no records were updated for an
UPDATE statement, that no rows matched the WHERE clause in the
query, or that no query has yet been executed. -1 indicates that the
query returned an error or that, for a SELECT query,
mysql_affected_rows() was called prior to calling
mysql_store_result().
None.
mysql_query(&mysql,"UPDATE products SET cost=cost*1.25 WHERE group=10");
printf("%ld products updated",(long) mysql_affected_rows(&mysql));
If you specify the flag CLIENT_FOUND_ROWS when connecting to
mysqld, mysql_affected_rows() will return the number of
rows matched by the WHERE clause for UPDATE statements.
Note that when you use a REPLACE command,
mysql_affected_rows() will return 2 if the new row replaced an
old row, because in this case one row was inserted after the
duplicate was deleted.
mysql_change_user()
my_bool mysql_change_user(MYSQL *mysql, const char *user, const
char *password, const char *db)
Changes the user and causes the database specified by db to
become the default (current) database on the connection specified by
mysql. In subsequent queries, this database is the default for
table references that do not include an explicit database specifier.
This function was introduced in MySQL Version 3.23.3.
mysql_change_user() fails if the connected user cannot be
authenticated or doesn't have permission to use the database. In
that case the user and database are not changed.
The db parameter may be set to NULL if you don't want to have a
default database.
Starting from MySQL 4.0.6 this command will always ROLLBACK any
active transactions, close all temporary tables, unlock all locked
tables and reset the state as if one had done a new connect.
This will happen even if the user didn't change.
Zero for success. Non-zero if an error occurred.
The same that you can get from mysql_real_connect().
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
ER_UNKNOWN_COM_ERROR
ER_ACCESS_DENIED_ERROR
ER_BAD_DB_ERROR
ER_DBACCESS_DENIED_ERROR
ER_WRONG_DB_NAME
if (mysql_change_user(&mysql, "user", "password", "new_database"))
{
fprintf(stderr, "Failed to change user. Error: %s\n",
mysql_error(&mysql));
}
mysql_character_set_name()
const char *mysql_character_set_name(MYSQL *mysql)
Returns the default character set for the current connection.
The default character set for the current connection.
None.
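For example (assuming an open connection mysql):
printf("Default character set: %s\n", mysql_character_set_name(&mysql));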
mysql_close()
void mysql_close(MYSQL *mysql)
Closes a previously opened connection. mysql_close() also deallocates
the connection handle pointed to by mysql if the handle was allocated
automatically by mysql_init() or mysql_connect().
None.
None.
mysql_connect()
MYSQL *mysql_connect(MYSQL *mysql, const char *host, const char *user, const char *passwd)
This function is deprecated. It is preferable to use
mysql_real_connect() instead.
mysql_connect() attempts to establish a connection to a MySQL
database engine running on host. mysql_connect() must complete
successfully before you can execute any of the other API functions, with the
exception of mysql_get_client_info().
The meanings of the parameters are the same as for the corresponding
parameters for mysql_real_connect() with the difference that the
connection parameter may be NULL. In this case the C API
allocates memory for the connection structure automatically and frees it
when you call mysql_close(). The disadvantage of this approach is
that you can't retrieve an error message if the connection fails. (To
get error information from mysql_errno() or mysql_error(),
you must provide a valid MYSQL pointer.)
Same as for mysql_real_connect().
Same as for mysql_real_connect().
mysql_create_db()
int mysql_create_db(MYSQL *mysql, const char *db)
Creates the database named by the db parameter.
This function is deprecated. It is preferable to use mysql_query()
to issue a SQL CREATE DATABASE statement instead.
Zero if the database was created successfully. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
if(mysql_create_db(&mysql, "my_database"))
{
fprintf(stderr, "Failed to create new database. Error: %s\n",
mysql_error(&mysql));
}
mysql_data_seek()
void mysql_data_seek(MYSQL_RES *result, my_ulonglong offset)
Seeks to an arbitrary row in a query result set. This requires that the
result set structure contains the entire result of the query, so
mysql_data_seek() may be used in conjunction only with
mysql_store_result(), not with mysql_use_result().
The offset should be a value in the range from 0 to
mysql_num_rows(result)-1.
None.
None.
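As an illustrative sketch (assuming result was obtained with mysql_store_result() and contains at least one row):
MYSQL_ROW row;

mysql_data_seek(result, mysql_num_rows(result) - 1);  /* position on the last row */
row = mysql_fetch_row(result);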
mysql_debug()
void mysql_debug(const char *debug)
Does a DBUG_PUSH with the given string. mysql_debug() uses the
Fred Fish debug library. To use this function, you must compile the client
library to support debugging.
See section E.1 Debugging a MySQL server. See section E.2 Debugging a MySQL client.
None.
None.
The call shown here causes the client library to generate a trace file in `/tmp/client.trace' on the client machine:
mysql_debug("d:t:O,/tmp/client.trace");
mysql_drop_db()
int mysql_drop_db(MYSQL *mysql, const char *db)
Drops the database named by the db parameter.
This function is deprecated. It is preferable to use mysql_query()
to issue a SQL DROP DATABASE statement instead.
Zero if the database was dropped successfully. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
if(mysql_drop_db(&mysql, "my_database"))
fprintf(stderr, "Failed to drop the database: Error: %s\n",
mysql_error(&mysql));
mysql_dump_debug_info()
int mysql_dump_debug_info(MYSQL *mysql)
Instructs the server to write some debug information to the log. For
this to work, the connected user must have the SUPER privilege.
Zero if the command was successful. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_eof()
my_bool mysql_eof(MYSQL_RES *result)
This function is deprecated. mysql_errno() or mysql_error()
may be used instead.
mysql_eof() determines whether the last row of a result
set has been read.
If you acquire a result set from a successful call to
mysql_store_result(), the client receives the entire set in one
operation. In this case, a NULL return from
mysql_fetch_row() always means the end of the result set has been
reached and it is unnecessary to call mysql_eof(). When used
with mysql_store_result(), mysql_eof() will always return
true.
On the other hand, if you use mysql_use_result() to initiate a result
set retrieval, the rows of the set are obtained from the server one by one as
you call mysql_fetch_row() repeatedly. Because an error may occur on
the connection during this process, a NULL return value from
mysql_fetch_row() does not necessarily mean the end of the result set
was reached normally. In this case, you can use mysql_eof() to
determine what happened. mysql_eof() returns a non-zero value if the
end of the result set was reached and zero if an error occurred.
Historically, mysql_eof() predates the standard MySQL error
functions mysql_errno() and mysql_error(). Because those error
functions provide the same information, their use is preferred over
mysql_eof(), which is now deprecated. (In fact, they provide more
information, because mysql_eof() returns only a boolean value whereas
the error functions indicate a reason for the error when one occurs.)
Zero if no error occurred. Non-zero if the end of the result set has been reached.
None.
The following example shows how you might use mysql_eof():
mysql_query(&mysql,"SELECT * FROM some_table");
result = mysql_use_result(&mysql);
while((row = mysql_fetch_row(result)))
{
// do something with data
}
if(!mysql_eof(result)) // mysql_fetch_row() failed due to an error
{
fprintf(stderr, "Error: %s\n", mysql_error(&mysql));
}
However, you can achieve the same effect with the standard MySQL error functions:
mysql_query(&mysql,"SELECT * FROM some_table");
result = mysql_use_result(&mysql);
while((row = mysql_fetch_row(result)))
{
// do something with data
}
if(mysql_errno(&mysql)) // mysql_fetch_row() failed due to an error
{
fprintf(stderr, "Error: %s\n", mysql_error(&mysql));
}
mysql_errno()
unsigned int mysql_errno(MYSQL *mysql)
For the connection specified by mysql, mysql_errno() returns
the error code for the most recently invoked API function that can succeed
or fail. A return value of zero means that no error occurred. Client error
message numbers are listed in the MySQL `errmsg.h' header file.
Server error message numbers are listed in `mysqld_error.h'. In the
MySQL source distribution you can find a complete list of
error messages and error numbers in the file `Docs/mysqld_error.txt'.
An error code value. Zero if no error occurred.
None.
mysql_error()
char *mysql_error(MYSQL *mysql)
For the connection specified by mysql, mysql_error() returns
the error message for the most recently invoked API function that can succeed
or fail. An empty string ("") is returned if no error occurred.
This means the following two tests are equivalent:
if(mysql_errno(&mysql))
{
// an error occurred
}
if(mysql_error(&mysql)[0] != '\0')
{
// an error occurred
}
The language of the client error messages may be changed by recompiling the MySQL client library. Currently you can choose error messages in several different languages. See section 4.6.2 Non-English Error Messages.
A character string that describes the error. An empty string if no error occurred.
None.
mysql_escape_string()
You should use mysql_real_escape_string() instead!
This function is identical to mysql_real_escape_string() except
that mysql_real_escape_string() takes a connection handler as
its first argument and escapes the string according to the current
character set. mysql_escape_string() does not take a connection
argument and does not respect the current charset setting.
mysql_fetch_field()
MYSQL_FIELD *mysql_fetch_field(MYSQL_RES *result)
Returns the definition of one column of a result set as a MYSQL_FIELD
structure. Call this function repeatedly to retrieve information about all
columns in the result set. mysql_fetch_field() returns NULL
when no more fields are left.
mysql_fetch_field() is reset to return information about the first
field each time you execute a new SELECT query. The field returned by
mysql_fetch_field() is also affected by calls to
mysql_field_seek().
If you've called mysql_query() to perform a SELECT on a table
but have not called mysql_store_result(), MySQL returns the
default blob length (8K bytes) if you call mysql_fetch_field() to ask
for the length of a BLOB field. (The 8K size is chosen because
MySQL doesn't know the maximum length for the BLOB. This
should be made configurable sometime.) Once you've retrieved the result set,
field->max_length contains the length of the largest value for this
column in the specific query.
The MYSQL_FIELD structure for the current column. NULL
if no columns are left.
None.
MYSQL_FIELD *field;
while((field = mysql_fetch_field(result)))
{
printf("field name %s\n", field->name);
}
mysql_fetch_fields()
MYSQL_FIELD *mysql_fetch_fields(MYSQL_RES *result)
Returns an array of all MYSQL_FIELD structures for a result set.
Each structure provides the field definition for one column of the result
set.
An array of MYSQL_FIELD structures for all columns of a result set.
None.
unsigned int num_fields;
unsigned int i;
MYSQL_FIELD *fields;
num_fields = mysql_num_fields(result);
fields = mysql_fetch_fields(result);
for(i = 0; i < num_fields; i++)
{
printf("Field %u is %s\n", i, fields[i].name);
}
mysql_fetch_field_direct()
MYSQL_FIELD *mysql_fetch_field_direct(MYSQL_RES *result, unsigned int fieldnr)
Given a field number fieldnr for a column within a result set, returns
that column's field definition as a MYSQL_FIELD structure. You may use
this function to retrieve the definition for an arbitrary column. The value
of fieldnr should be in the range from 0 to
mysql_num_fields(result)-1.
The MYSQL_FIELD structure for the specified column.
None.
unsigned int num_fields;
unsigned int i;
MYSQL_FIELD *field;
num_fields = mysql_num_fields(result);
for(i = 0; i < num_fields; i++)
{
field = mysql_fetch_field_direct(result, i);
printf("Field %u is %s\n", i, field->name);
}
mysql_fetch_lengths()
unsigned long *mysql_fetch_lengths(MYSQL_RES *result)
Returns the lengths of the columns of the current row within a result set.
If you plan to copy field values, this length information is also useful for
optimisation, because you can avoid calling strlen(). In addition, if
the result set contains binary data, you must use this function to
determine the size of the data, because strlen() returns incorrect
results for any field containing null characters.
The length for empty columns and for columns containing NULL values is
zero. To see how to distinguish these two cases, see the description for
mysql_fetch_row().
An array of unsigned long integers representing the size of each column (not
including any terminating null characters).
NULL if an error occurred.
mysql_fetch_lengths() is valid only for the current row of the result
set. It returns NULL if you call it before calling
mysql_fetch_row() or after retrieving all rows in the result.
MYSQL_ROW row;
unsigned long *lengths;
unsigned int num_fields;
unsigned int i;
row = mysql_fetch_row(result);
if (row)
{
num_fields = mysql_num_fields(result);
lengths = mysql_fetch_lengths(result);
for(i = 0; i < num_fields; i++)
{
printf("Column %u is %lu bytes in length.\n", i, lengths[i]);
}
}
mysql_fetch_row()
MYSQL_ROW mysql_fetch_row(MYSQL_RES *result)
Retrieves the next row of a result set. When used after
mysql_store_result(), mysql_fetch_row() returns NULL
when there are no more rows to retrieve. When used after
mysql_use_result(), mysql_fetch_row() returns NULL when
there are no more rows to retrieve or if an error occurred.
The number of values in the row is given by mysql_num_fields(result).
If row holds the return value from a call to mysql_fetch_row(),
pointers to the values are accessed as row[0] to
row[mysql_num_fields(result)-1]. NULL values in the row are
indicated by NULL pointers.
The lengths of the field values in the row may be obtained by calling
mysql_fetch_lengths(). Empty fields and fields containing
NULL both have length 0; you can distinguish these by checking
the pointer for the field value. If the pointer is NULL, the field
is NULL; otherwise, the field is empty.
A MYSQL_ROW structure for the next row. NULL if
there are no more rows to retrieve or if an error occurred.
CR_SERVER_LOST
CR_UNKNOWN_ERROR
MYSQL_ROW row;
unsigned int num_fields;
unsigned int i;
num_fields = mysql_num_fields(result);
while ((row = mysql_fetch_row(result)))
{
unsigned long *lengths;
lengths = mysql_fetch_lengths(result);
for(i = 0; i < num_fields; i++)
{
printf("[%.*s] ", (int) lengths[i], row[i] ? row[i] : "NULL");
}
printf("\n");
}
mysql_field_count()
unsigned int mysql_field_count(MYSQL *mysql)
If you are using a version of MySQL earlier than Version 3.22.24, you
should use unsigned int mysql_num_fields(MYSQL *mysql) instead.
Returns the number of columns for the most recent query on the connection.
The normal use of this function is when mysql_store_result()
returned NULL (and thus you have no result set pointer).
In this case, you can call mysql_field_count() to
determine whether mysql_store_result() should have produced a
non-empty result. This allows the client program to take proper action
without knowing whether the query was a SELECT (or
SELECT-like) statement. The example shown here illustrates how this
may be done.
See section 8.1.12.1 Why Is It that After mysql_query() Returns Success, mysql_store_result() Sometimes Returns NULL?.
An unsigned integer representing the number of fields in a result set.
None.
MYSQL_RES *result;
unsigned int num_fields;
unsigned int num_rows;
if (mysql_query(&mysql,query_string))
{
// error
}
else // query succeeded, process any data returned by it
{
result = mysql_store_result(&mysql);
if (result) // there are rows
{
num_fields = mysql_num_fields(result);
// retrieve rows, then call mysql_free_result(result)
}
else // mysql_store_result() returned nothing; should it have?
{
if(mysql_field_count(&mysql) == 0)
{
// query does not return data
// (it was not a SELECT)
num_rows = mysql_affected_rows(&mysql);
}
else // mysql_store_result() should have returned data
{
fprintf(stderr, "Error: %s\n", mysql_error(&mysql));
}
}
}
An alternative is to replace the mysql_field_count(&mysql) call with
mysql_errno(&mysql). In this case, you are checking directly for an
error from mysql_store_result() rather than inferring from the value
of mysql_field_count() whether the statement was a
SELECT.
mysql_field_seek()
MYSQL_FIELD_OFFSET mysql_field_seek(MYSQL_RES *result, MYSQL_FIELD_OFFSET offset)
Sets the field cursor to the given offset. The next call to
mysql_fetch_field() will retrieve the field definition of the column
associated with that offset.
To seek to the beginning of a row, pass an offset value of zero.
The previous value of the field cursor.
None.
mysql_field_tell()
MYSQL_FIELD_OFFSET mysql_field_tell(MYSQL_RES *result)
Returns the position of the field cursor used for the last
mysql_fetch_field(). This value can be used as an argument to
mysql_field_seek().
The current offset of the field cursor.
None.
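A small sketch of how mysql_field_tell() and mysql_field_seek() can be combined (assuming an existing result set result):
MYSQL_FIELD_OFFSET saved;
MYSQL_FIELD *field;

saved = mysql_field_tell(result);   /* remember the current field cursor */
field = mysql_fetch_field(result);  /* advances the field cursor */
mysql_field_seek(result, saved);    /* restore the saved position */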
mysql_free_result()
void mysql_free_result(MYSQL_RES *result)
Frees the memory allocated for a result set by mysql_store_result(),
mysql_use_result(), mysql_list_dbs(), etc. When you are done
with a result set, you must free the memory it uses by calling
mysql_free_result().
None.
None.
mysql_get_client_info()
char *mysql_get_client_info(void)
Returns a string that represents the client library version.
A character string that represents the MySQL client library version.
None.
mysql_get_server_version()
unsigned long mysql_get_server_version(MYSQL *mysql)
Returns version number of server as an integer (new in 4.1).
A number that represents the MySQL server version in the format
main_version*10000 + minor_version*100 + sub_version.
For example, 4.1.0 is returned as 40100.
This is useful for quickly determining the server version in a client program, for example to check whether some capability exists.
None.
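For example (assuming an open connection mysql):
unsigned long version = mysql_get_server_version(&mysql);

if (version >= 40100)
    printf("Server is 4.1.0 or newer\n");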
mysql_get_host_info()
char *mysql_get_host_info(MYSQL *mysql)
Returns a string describing the type of connection in use, including the server host name.
A character string representing the server host name and the connection type.
None.
mysql_get_proto_info()
unsigned int mysql_get_proto_info(MYSQL *mysql)
Returns the protocol version used by the current connection.
An unsigned integer representing the protocol version used by the current connection.
None.
mysql_get_server_info()
char *mysql_get_server_info(MYSQL *mysql)
Returns a string that represents the server version number.
A character string that represents the server version number.
None.
mysql_info()
char *mysql_info(MYSQL *mysql)
Retrieves a string providing information about the most recently executed
query, but only for the statements listed here. For other statements,
mysql_info() returns NULL. The format of the string varies
depending on the type of query, as described here. The numbers are
illustrative only; the string will contain values appropriate for the query.
INSERT INTO ... SELECT ...
Records: 100 Duplicates: 0 Warnings: 0
INSERT INTO ... VALUES (...),(...),(...)...
Records: 3 Duplicates: 0 Warnings: 0
LOAD DATA INFILE ...
Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
ALTER TABLE
Records: 3 Duplicates: 0 Warnings: 0
UPDATE
Rows matched: 40 Changed: 40 Warnings: 0
Note that mysql_info() returns a non-NULL value for the
INSERT ... VALUES statement only if multiple value lists are
specified in the statement.
A character string representing additional information about the most
recently executed query. NULL if no information is available for the
query.
None.
mysql_init()
MYSQL *mysql_init(MYSQL *mysql)
Allocates or initialises a MYSQL object suitable for
mysql_real_connect(). If mysql is a NULL pointer, the
function allocates, initialises, and returns a new object. Otherwise, the
object is initialised and the address of the object is returned. If
mysql_init() allocates a new object, it will be freed when
mysql_close() is called to close the connection.
An initialised MYSQL* handle. NULL if there was
insufficient memory to allocate a new object.
In case of insufficient memory, NULL is returned.
mysql_insert_id()
my_ulonglong mysql_insert_id(MYSQL *mysql)
Returns the ID generated for an AUTO_INCREMENT column by the previous
query. Use this function after you have performed an INSERT query
into a table that contains an AUTO_INCREMENT field.
Note that mysql_insert_id() returns 0 if the previous query
does not generate an AUTO_INCREMENT value. If you need to save
the value for later, be sure to call mysql_insert_id() immediately
after the query that generates the value.
mysql_insert_id() is updated after INSERT and
UPDATE statements that generate an AUTO_INCREMENT value or
that set a column value to LAST_INSERT_ID(expr).
See section 6.3.6.2 Miscellaneous Functions.
Also note that the value of the SQL LAST_INSERT_ID() function always
contains the most recently generated AUTO_INCREMENT value, and is
not reset between queries because the value of that function is maintained
in the server.
The value of the AUTO_INCREMENT field that was updated by the previous
query. Returns zero if there was no previous query on the connection or if
the query did not update an AUTO_INCREMENT value.
None.
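A short sketch (the table and column names are hypothetical; an open connection mysql is assumed, and id is an AUTO_INCREMENT column):
mysql_query(&mysql, "INSERT INTO items (id, name) VALUES (NULL, 'example')");
printf("New AUTO_INCREMENT value: %lu\n", (unsigned long) mysql_insert_id(&mysql));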
mysql_kill()
int mysql_kill(MYSQL *mysql, unsigned long pid)
Asks the server to kill the thread specified by pid.
Zero for success. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_list_dbs()
MYSQL_RES *mysql_list_dbs(MYSQL *mysql, const char *wild)
Returns a result set consisting of database names on the server that match
the simple regular expression specified by the wild parameter.
wild may contain the wildcard characters `%' or `_', or may
be a NULL pointer to match all databases. Calling
mysql_list_dbs() is similar to executing the query SHOW
DATABASES [LIKE wild].
You must free the result set with mysql_free_result().
A MYSQL_RES result set for success. NULL if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
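For example, to print all database names (assuming an open connection mysql):
MYSQL_RES *dbs;
MYSQL_ROW row;

dbs = mysql_list_dbs(&mysql, NULL);    /* NULL wildcard matches all databases */
if (dbs)
{
    while ((row = mysql_fetch_row(dbs)))
        printf("%s\n", row[0]);        /* each row contains one database name */
    mysql_free_result(dbs);
}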
mysql_list_fields()
MYSQL_RES *mysql_list_fields(MYSQL *mysql, const char *table, const char *wild)
Returns a result set consisting of field names in the given table that match
the simple regular expression specified by the wild parameter.
wild may contain the wildcard characters `%' or `_', or may
be a NULL pointer to match all fields. Calling
mysql_list_fields() is similar to executing the query SHOW
COLUMNS FROM tbl_name [LIKE wild].
Note that it's recommended that you use SHOW COLUMNS FROM tbl_name
instead of mysql_list_fields().
You must free the result set with mysql_free_result().
A MYSQL_RES result set for success. NULL if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_list_processes()
MYSQL_RES *mysql_list_processes(MYSQL *mysql)
Returns a result set describing the current server threads. This is the same
kind of information as that reported by mysqladmin processlist or
a SHOW PROCESSLIST query.
You must free the result set with mysql_free_result().
A MYSQL_RES result set for success. NULL if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_list_tables()
MYSQL_RES *mysql_list_tables(MYSQL *mysql, const char *wild)
Returns a result set consisting of table names in the current database that
match the simple regular expression specified by the wild parameter.
wild may contain the wildcard characters `%' or `_', or may
be a NULL pointer to match all tables. Calling
mysql_list_tables() is similar to executing the query SHOW
TABLES [LIKE wild].
You must free the result set with mysql_free_result().
A MYSQL_RES result set for success. NULL if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_num_fields()
unsigned int mysql_num_fields(MYSQL_RES *result)
or
unsigned int mysql_num_fields(MYSQL *mysql)
The second form doesn't work on MySQL Version 3.22.24 or newer. To pass a
MYSQL* argument, you must use
unsigned int mysql_field_count(MYSQL *mysql) instead.
Returns the number of columns in a result set.
Note that you can get the number of columns either from a pointer to a result
set or to a connection handle. You would use the connection handle if
mysql_store_result() or mysql_use_result() returned
NULL (and thus you have no result set pointer). In this case, you can
call mysql_field_count() to determine whether
mysql_store_result() should have produced a non-empty result. This
allows the client program to take proper action without knowing whether or
not the query was a SELECT (or SELECT-like) statement. The
example shown here illustrates how this may be done.
See section 8.1.12.1 Why Is It that After mysql_query() Returns Success, mysql_store_result() Sometimes Returns NULL?.
An unsigned integer representing the number of fields in a result set.
None.
MYSQL_RES *result;
unsigned int num_fields;
unsigned int num_rows;
if (mysql_query(&mysql,query_string))
{
// error
}
else // query succeeded, process any data returned by it
{
result = mysql_store_result(&mysql);
if (result) // there are rows
{
num_fields = mysql_num_fields(result);
// retrieve rows, then call mysql_free_result(result)
}
else // mysql_store_result() returned nothing; should it have?
{
if (mysql_errno(&mysql))
{
fprintf(stderr, "Error: %s\n", mysql_error(&mysql));
}
else if (mysql_field_count(&mysql) == 0)
{
// query does not return data
// (it was not a SELECT)
num_rows = mysql_affected_rows(&mysql);
}
}
}
An alternative (if you know that your query should have returned a result set)
is to replace the mysql_errno(&mysql) call with a check of whether
mysql_field_count(&mysql) is 0. This will happen only if something
went wrong.
mysql_num_rows()
my_ulonglong mysql_num_rows(MYSQL_RES *result)
Returns the number of rows in the result set.
The use of mysql_num_rows() depends on whether you use
mysql_store_result() or mysql_use_result() to return the result
set. If you use mysql_store_result(), mysql_num_rows() may be
called immediately. If you use mysql_use_result(),
mysql_num_rows() will not return the correct value until all the rows
in the result set have been retrieved.
The number of rows in the result set.
None.
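For example (assuming an open connection mysql on which a SELECT query has just been executed):
MYSQL_RES *result = mysql_store_result(&mysql);

if (result)
{
    printf("Rows in result set: %lu\n", (unsigned long) mysql_num_rows(result));
    mysql_free_result(result);
}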
mysql_options()
int mysql_options(MYSQL *mysql, enum mysql_option option, const char *arg)
Can be used to set extra connect options and affect behaviour for a connection. This function may be called multiple times to set several options.
mysql_options() should be called after mysql_init() and before
mysql_connect() or mysql_real_connect().
The option argument is the option that you want to set; the arg
argument is the value for the option. If the option is an integer, then
arg should point to the value of the integer.
Possible option values:
| Option | Argument type | Function |
| MYSQL_OPT_CONNECT_TIMEOUT | unsigned int * | Connect timeout in seconds. |
| MYSQL_OPT_COMPRESS | Not used | Use the compressed client/server protocol. |
| MYSQL_OPT_LOCAL_INFILE | optional pointer to uint | If no pointer is given, or if the pointer points to an unsigned int != 0, the command LOAD LOCAL INFILE is enabled. |
| MYSQL_OPT_NAMED_PIPE | Not used | Use named pipes to connect to a MySQL server on NT. |
| MYSQL_INIT_COMMAND | char * | Command to execute when connecting to the MySQL server. Will automatically be re-executed when reconnecting. |
| MYSQL_READ_DEFAULT_FILE | char * | Read options from the named option file instead of from `my.cnf'. |
| MYSQL_READ_DEFAULT_GROUP | char * | Read options from the named group from `my.cnf' or the file specified with MYSQL_READ_DEFAULT_FILE. |
Note that the group client is always read if you use
MYSQL_READ_DEFAULT_FILE or MYSQL_READ_DEFAULT_GROUP.
The specified group in the option file may contain the following options:
| Option | Description |
| connect-timeout | Connect timeout in seconds. On Linux this timeout is also used for waiting for the first answer from the server. |
| compress | Use the compressed client/server protocol. |
| database | Connect to this database if no database was specified in the connect command. |
| debug | Debug options. |
| disable-local-infile | Disable use of LOAD DATA LOCAL. |
| host | Default host name. |
| init-command | Command to execute when connecting to the MySQL server. Will automatically be re-executed when reconnecting. |
| interactive-timeout | Same as specifying CLIENT_INTERACTIVE to mysql_real_connect(). See section 8.1.3.175 mysql_real_connect(). |
| local-infile[=(0|1)] | If no argument or argument != 0, enable use of LOAD DATA LOCAL. |
| max_allowed_packet | Max size of packet the client can read from the server. |
| password | Default password. |
| pipe | Use named pipes to connect to a MySQL server on NT. |
| protocol=(TCP | SOCKET | PIPE | MEMORY) | Which protocol to use when connecting to the server (new in 4.1). |
| port | Default port number. |
| return-found-rows | Tell mysql_info() to return found rows instead of updated rows when using UPDATE. |
| shared-memory-base-name=name | Shared memory name to use to connect to the server (default is "MySQL"). New in MySQL 4.1. |
| socket | Default socket number. |
| user | Default user. |
Note that timeout has been replaced by connect-timeout, but
timeout will still work for a while.
For more information about option files, see section 4.1.2 `my.cnf' Option Files.
Zero for success. Non-zero if you used an unknown option.
MYSQL mysql;
mysql_init(&mysql);
mysql_options(&mysql,MYSQL_OPT_COMPRESS,0);
mysql_options(&mysql,MYSQL_READ_DEFAULT_GROUP,"odbc");
if (!mysql_real_connect(&mysql,"host","user","passwd","database",0,NULL,0))
{
fprintf(stderr, "Failed to connect to database: Error: %s\n",
mysql_error(&mysql));
}
The above requests the client to use the compressed client/server protocol and
read the additional options from the odbc section in the `my.cnf'
file.
mysql_ping()
int mysql_ping(MYSQL *mysql)
Checks whether the connection to the server is working. If it has gone down, an automatic reconnection is attempted.
This function can be used by clients that remain idle for a long while, to check whether the server has closed the connection and reconnect if necessary.
Zero if the server is alive. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_UNKNOWN_ERROR
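For example (assuming an open connection mysql):
if (mysql_ping(&mysql))
{
    fprintf(stderr, "Connection is down: %s\n", mysql_error(&mysql));
}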
mysql_query()
int mysql_query(MYSQL *mysql, const char *query)
Executes the SQL query pointed to by the null-terminated string query.
The query must consist of a single SQL statement. You should not add
a terminating semicolon (`;') or \g to the statement.
mysql_query() cannot be used for queries that contain binary data; you
should use mysql_real_query() instead. (Binary data may contain the
`\0' character, which mysql_query() interprets as the end of the
query string.)
If you want to know if the query should return a result set or not, you can
use mysql_field_count() to check for this.
See section 8.1.3.85 mysql_field_count().
Zero if the query was successful. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
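For example (the table name is hypothetical; an open connection mysql is assumed):
if (mysql_query(&mysql, "CREATE TABLE my_table (id INT NOT NULL)"))
{
    fprintf(stderr, "Query failed: %s\n", mysql_error(&mysql));
}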
mysql_real_connect()
MYSQL *mysql_real_connect(MYSQL *mysql, const char *host,
const char *user, const char *passwd, const char *db,
unsigned int port, const char *unix_socket,
unsigned long client_flag)
mysql_real_connect() attempts to establish a connection to a
MySQL database engine running on host.
mysql_real_connect() must complete successfully before you can execute
any of the other API functions, with the exception of
mysql_get_client_info().
The parameters are specified as follows:
The first parameter should be the address of an existing MYSQL
structure. Before calling mysql_real_connect() you must call
mysql_init() to initialise the MYSQL structure. You can
change a lot of connect options with the mysql_options()
call. See section 8.1.3.163 mysql_options().
The value of host may be either a hostname or an IP address. If
host is NULL or the string "localhost", a connection to
the local host is assumed. If the OS supports sockets (Unix) or named pipes
(Windows), they are used instead of TCP/IP to connect to the server.
The user parameter contains the user's MySQL login ID. If
user is NULL, the current user is assumed. Under Unix, this is
the current login name. Under Windows ODBC, the current user name must be
specified explicitly.
See section 8.2.2 How to Fill in the Various Fields in the ODBC Administrator Program.
The passwd parameter contains the password for user. If
passwd is NULL, only entries in the user table for the
user that have a blank (empty) password field will be checked for a match. This
allows the database administrator to set up the MySQL privilege
system in such a way that users get different privileges depending on whether
or not they have specified a password.
Note: Do not attempt to encrypt the password before calling
mysql_real_connect(); password encryption is handled automatically by
the client API.
db is the database name.
If db is not NULL, the connection will set the default
database to this value.
If port is not 0, the value will be used as the port number
for the TCP/IP connection. Note that the host parameter
determines the type of the connection.
If unix_socket is not NULL, the string specifies the
socket or named pipe that should be used. Note that the host
parameter determines the type of the connection.
The value of client_flag is usually 0, but can be set to a combination
of the following flags in very special circumstances:
| Flag name | Flag description |
| CLIENT_COMPRESS | Use compression protocol. |
| CLIENT_FOUND_ROWS | Return the number of found (matched) rows, not the number of affected rows. |
| CLIENT_IGNORE_SPACE | Allow spaces after function names. Makes all function names reserved words. |
| CLIENT_INTERACTIVE | Allow interactive_timeout seconds (instead of wait_timeout seconds) of inactivity before closing the connection. |
| CLIENT_LOCAL_FILES | Enable LOAD DATA LOCAL handling. |
| CLIENT_MULTI_QUERIES | Tell the server that the client may send multi-row queries (separated with `;'). If this flag is not set, multi-row queries are disabled. New in 4.1. |
| CLIENT_MULTI_RESULTS | Tell the server that the client can handle multiple result sets from multi-queries or stored procedures. This is automatically set if CLIENT_MULTI_QUERIES is set. New in 4.1. |
| CLIENT_NO_SCHEMA | Don't allow the db_name.tbl_name.col_name syntax. This is for ODBC. It causes the parser to generate an error if you use that syntax, which is useful for trapping bugs in some ODBC programs. |
| CLIENT_ODBC | The client is an ODBC client. This changes mysqld to be more ODBC-friendly. |
| CLIENT_SSL | Use SSL (encrypted protocol). |
A MYSQL* connection handle if the connection was successful,
NULL if the connection was unsuccessful. For a successful connection,
the return value is the same as the value of the first parameter.
CR_CONN_HOST_ERROR
CR_CONNECTION_ERROR
CR_IPSOCK_ERROR
CR_OUT_OF_MEMORY
CR_SOCKET_CREATE_ERROR
CR_UNKNOWN_HOST
CR_VERSION_ERROR
--old-protocol option.
CR_NAMEDPIPEOPEN_ERROR
CR_NAMEDPIPEWAIT_ERROR
CR_NAMEDPIPESETSTATE_ERROR
CR_SERVER_LOST
If connect_timeout > 0 and it took longer than connect_timeout
seconds to connect to the server, or if the server died while executing the
init-command.
MYSQL mysql;
mysql_init(&mysql);
mysql_options(&mysql,MYSQL_READ_DEFAULT_GROUP,"your_prog_name");
if (!mysql_real_connect(&mysql,"host","user","passwd","database",0,NULL,0))
{
fprintf(stderr, "Failed to connect to database: Error: %s\n",
mysql_error(&mysql));
}
By using mysql_options() the MySQL library will read the
[client] and [your_prog_name] sections in the `my.cnf'
file which will ensure that your program will work, even if someone has
set up MySQL in some non-standard way.
Note that upon connection, mysql_real_connect() sets the reconnect
flag (part of the MYSQL structure) to a value of 1. This
flag indicates, in the event that a query cannot be performed because
of a lost connection, to try reconnecting to the server before giving up.
mysql_real_escape_string()
unsigned long mysql_real_escape_string(MYSQL *mysql, char *to, const char *from, unsigned long length)
This function is used to create a legal SQL string that you can use in a SQL statement. See section 6.1.1.1 Strings.
The string in from is encoded to an escaped SQL string, taking
into account the current character set of the connection. The result is placed
in to and a terminating null byte is appended. Characters
encoded are NUL (ASCII 0), `\n', `\r', `\',
`'', `"', and Control-Z (see section 6.1.1 Literals: How to Write Strings and Numbers).
(Strictly speaking, MySQL requires only that backslash and the quote
character used to quote the string in the query be escaped. This function
quotes the other characters to make them easier to read in log files.)
The string pointed to by from must be length bytes long. You
must allocate the to buffer to be at least length*2+1 bytes
long. (In the worst case, each character may need to be encoded using two
bytes, and you need room for the terminating null byte.) When
mysql_real_escape_string() returns, the contents of to will be a
null-terminated string. The return value is the length of the encoded
string, not including the terminating null character.
char query[1000],*end;
end = strmov(query,"INSERT INTO test_table values(");
*end++ = '\'';
end += mysql_real_escape_string(&mysql, end,"What's this",11);
*end++ = '\'';
*end++ = ',';
*end++ = '\'';
end += mysql_real_escape_string(&mysql, end,"binary data: \0\r\n",16);
*end++ = '\'';
*end++ = ')';
if (mysql_real_query(&mysql,query,(unsigned int) (end - query)))
{
fprintf(stderr, "Failed to insert row, Error: %s\n",
mysql_error(&mysql));
}
The strmov() function used in the example is included in the
mysqlclient library and works like strcpy() but returns a
pointer to the terminating null of the first parameter.
The length of the value placed into to, not including the
terminating null character.
None.
mysql_real_query()
int mysql_real_query(MYSQL *mysql, const char *query, unsigned long length)
Executes the SQL query pointed to by query, which should be a string
length bytes long. The query must consist of a single SQL statement.
You should not add a terminating semicolon (`;') or \g to the
statement.
You must use mysql_real_query() rather than
mysql_query() for queries that contain binary data, because binary data
may contain the `\0' character. In addition, mysql_real_query()
is faster than mysql_query() because it does not call strlen() on
the query string.
If you want to know if the query should return a result set or not, you can
use mysql_field_count() to check for this.
See section 8.1.3.85 mysql_field_count().
Zero if the query was successful. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_reload()
int mysql_reload(MYSQL *mysql)
Asks the MySQL server to reload the grant tables. The
connected user must have the RELOAD privilege.
This function is deprecated. It is preferable to use mysql_query()
to issue a SQL FLUSH PRIVILEGES statement instead.
Zero for success. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
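A minimal sketch of the preferred FLUSH PRIVILEGES approach, assuming mysql is an already-connected MYSQL * handle whose user has the RELOAD privilege:
if (mysql_query(mysql, "FLUSH PRIVILEGES"))
{
  fprintf(stderr, "Failed to reload grant tables: Error: %s\n",
          mysql_error(mysql));
}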
mysql_row_seek()
MYSQL_ROW_OFFSET mysql_row_seek(MYSQL_RES *result, MYSQL_ROW_OFFSET offset)
Sets the row cursor to an arbitrary row in a query result set. This requires
that the result set structure contains the entire result of the query, so
mysql_row_seek() may be used in conjunction only with
mysql_store_result(), not with mysql_use_result().
The offset should be a value returned from a call to mysql_row_tell()
or to mysql_row_seek(). This value is not simply a row number; if you
want to seek to a row within a result set using a row number, use
mysql_data_seek() instead.
The previous value of the row cursor. This value may be passed to a
subsequent call to mysql_row_seek().
None.
mysql_row_tell()
MYSQL_ROW_OFFSET mysql_row_tell(MYSQL_RES *result)
Returns the current position of the row cursor for the last
mysql_fetch_row(). This value can be used as an argument to
mysql_row_seek().
You should use mysql_row_tell() only after mysql_store_result(),
not after mysql_use_result().
The current offset of the row cursor.
None.
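A minimal sketch of saving and restoring the row cursor, assuming result is a MYSQL_RES obtained from mysql_store_result():
MYSQL_ROW_OFFSET saved_pos;
/* Remember the current cursor position */
saved_pos= mysql_row_tell(result);
/* ... fetch some rows with mysql_fetch_row(result), which advances the cursor ... */
/* Jump back to the remembered position; the previous position is returned */
mysql_row_seek(result, saved_pos);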
mysql_select_db()
int mysql_select_db(MYSQL *mysql, const char *db)
Causes the database specified by db to become the default (current)
database on the connection specified by mysql. In subsequent queries,
this database is the default for table references that do not include an
explicit database specifier.
mysql_select_db() fails unless the connected user can be authenticated
as having permission to use the database.
Zero for success. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
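A minimal sketch, assuming mysql is a connected MYSQL * handle and that a database named test exists:
if (mysql_select_db(mysql, "test"))
{
  fprintf(stderr, "Failed to select database: Error: %s\n",
          mysql_error(mysql));
}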
mysql_sqlstate()
const char *mysql_sqlstate(MYSQL *mysql)
Returns the SQLSTATE error code for the last error.
Note that not all MySQL errors are mapped to SQLSTATE codes yet. The value "HY000" (general error) is returned for errors that are not mapped.
This function was added to MySQL 4.1.1.
SQLSTATE is a 5-character string specified by ANSI SQL and ODBC.
See section 8.1.3.51 mysql_errno().
See section 8.1.3.55 mysql_error().
See section 8.1.7.79 mysql_stmt_sqlstate().
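A minimal sketch of reporting the error number, SQLSTATE, and message together, assuming MySQL 4.1.1 or later and a connected handle mysql (the table name is only an illustration):
if (mysql_query(mysql, "SELECT * FROM no_such_table"))
{
  fprintf(stderr, "Error %u (%s): %s\n",
          mysql_errno(mysql), mysql_sqlstate(mysql), mysql_error(mysql));
}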
mysql_shutdown()
int mysql_shutdown(MYSQL *mysql)
Asks the database server to shut down. The connected user must have
SHUTDOWN privileges.
Zero for success. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_stat()
char *mysql_stat(MYSQL *mysql)
Returns a character string containing information similar to that provided by
the mysqladmin status command. This includes uptime in seconds and
the number of running threads, questions, reloads, and open tables.
A character string describing the server status. NULL if an
error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
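A minimal sketch, assuming a connected handle mysql:
char *status;
if ((status= mysql_stat(mysql)))
  fprintf(stdout, "Server status: %s\n", status);
else
  fprintf(stderr, "mysql_stat() failed: Error: %s\n", mysql_error(mysql));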
mysql_store_result()
MYSQL_RES *mysql_store_result(MYSQL *mysql)
You must call mysql_store_result() or mysql_use_result()
for every query that successfully retrieves data (SELECT,
SHOW, DESCRIBE, EXPLAIN).
You don't have to call mysql_store_result() or
mysql_use_result() for other queries, but it will not do any
harm or cause any notable performance loss if you call mysql_store_result()
in all cases. You can detect whether the query had a result set by
checking whether mysql_store_result() returns 0 (more about this later on).
If you want to know if the query should return a result set or not, you can
use mysql_field_count() to check for this.
See section 8.1.3.85 mysql_field_count().
mysql_store_result() reads the entire result of a query to the client,
allocates a MYSQL_RES structure, and places the result into this
structure.
mysql_store_result() returns a null pointer if the query didn't return
a result set (if the query was, for example, an INSERT statement).
mysql_store_result() also returns a null pointer if reading of the
result set failed. You can check whether an error occurred by checking whether
mysql_error() returns a non-empty string, mysql_errno()
returns non-zero, or mysql_field_count() returns non-zero.
An empty result set is returned if there are no rows returned. (An empty result set differs from a null pointer as a return value.)
Once you have called mysql_store_result() and got a result back
that isn't a null pointer, you may call mysql_num_rows() to find
out how many rows are in the result set.
You can call mysql_fetch_row() to fetch rows from the result set,
or mysql_row_seek() and mysql_row_tell() to obtain or
set the current row position within the result set.
You must call mysql_free_result() once you are done with the result
set.
See section 8.1.12.1 Why Is It that After mysql_query() Returns Success, mysql_store_result() Sometimes Returns NULL?.
A MYSQL_RES result structure with the results. NULL if
an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
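A minimal sketch of the typical query/store/fetch/free sequence, assuming a connected handle mysql and a table named test_table (the table name is only an illustration):
MYSQL_RES *result;
MYSQL_ROW row;
unsigned int num_fields, i;
if (mysql_query(mysql, "SELECT * FROM test_table"))
{
  fprintf(stderr, "Query failed: Error: %s\n", mysql_error(mysql));
}
else if (!(result= mysql_store_result(mysql)))
{
  fprintf(stderr, "Couldn't retrieve result set: Error: %s\n", mysql_error(mysql));
}
else
{
  num_fields= mysql_num_fields(result);
  while ((row= mysql_fetch_row(result)))
  {
    for (i= 0; i < num_fields; i++)
      printf("%s\t", row[i] ? row[i] : "NULL");
    printf("\n");
  }
  /* Always free the buffered result set when done */
  mysql_free_result(result);
}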
mysql_thread_id()
unsigned long mysql_thread_id(MYSQL *mysql)
Returns the thread ID of the current connection. This value can be used as
an argument to mysql_kill() to kill the thread.
If the connection is lost and you reconnect with mysql_ping(), the
thread ID will change. This means you should not get the thread ID and store
it for later. You should get it when you need it.
The thread ID of the current connection.
None.
mysql_use_result()
MYSQL_RES *mysql_use_result(MYSQL *mysql)
You must call mysql_store_result() or mysql_use_result() for
every query that successfully retrieves data (SELECT, SHOW,
DESCRIBE, EXPLAIN).
mysql_use_result() initiates a result set retrieval but does not
actually read the result set into the client like mysql_store_result()
does. Instead, each row must be retrieved individually by making calls to
mysql_fetch_row(). This reads the result of a query directly from the
server without storing it in a temporary table or local buffer, which is
somewhat faster and uses much less memory than mysql_store_result().
The client will only allocate memory for the current row and a communication
buffer that may grow up to max_allowed_packet bytes.
On the other hand, you shouldn't use mysql_use_result() if you are
doing a lot of processing for each row on the client side, or if the output
is sent to a screen on which the user may type a ^S (stop scroll).
This will tie up the server and prevent other threads from updating any
tables from which the data is being fetched.
When using mysql_use_result(), you must execute
mysql_fetch_row() until a NULL value is returned, otherwise, the
unfetched rows will be returned as part of the result set for your next
query. The C API will give the error Commands out of sync; you can't
run this command now if you forget to do this!
You may not use mysql_data_seek(), mysql_row_seek(),
mysql_row_tell(), mysql_num_rows(), or
mysql_affected_rows() with a result returned from
mysql_use_result(), nor may you issue other queries until the
mysql_use_result() has finished. (However, after you have fetched all
the rows, mysql_num_rows() will accurately return the number of rows
fetched.)
You must call mysql_free_result() once you are done with the result
set.
A MYSQL_RES result structure. NULL if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
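A minimal sketch of unbuffered retrieval, assuming a connected handle mysql and a table named test_table (only an illustration); note that all rows must be fetched before issuing another query:
MYSQL_RES *result;
MYSQL_ROW row;
if (mysql_query(mysql, "SELECT col1 FROM test_table") ||
    !(result= mysql_use_result(mysql)))
{
  fprintf(stderr, "Query failed: Error: %s\n", mysql_error(mysql));
}
else
{
  /* Rows are read from the server one at a time */
  while ((row= mysql_fetch_row(result)))
    printf("%s\n", row[0] ? row[0] : "NULL");
  /* Fetching until NULL is mandatory before running the next query */
  mysql_free_result(result);
}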
mysql_commit()
my_bool mysql_commit(MYSQL *mysql)
Commits the current transaction. Available from MySQL 4.1.
Zero if successful. Non-zero if an error occurred.
None
mysql_rollback()
my_bool mysql_rollback(MYSQL *mysql)
Rolls back the current transaction. Available from MySQL 4.1.
Zero if successful. Non-zero if an error occurred.
None.
mysql_autocommit()
my_bool mysql_autocommit(MYSQL *mysql, my_bool mode)
Sets autocommit mode on or off.
If mode is 1, autocommit mode is turned on; if mode is 0, it is turned off.
Available from MySQL 4.1.
Zero if successful. Non-zero if an error occurred.
None.
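A minimal transaction sketch, assuming MySQL 4.1, a connected handle mysql, and a transactional (for example InnoDB) table named account; the table and column names are only illustrations:
/* Turn autocommit off so the following statements form one transaction */
mysql_autocommit(mysql, 0);
if (mysql_query(mysql, "UPDATE account SET balance=balance-100 WHERE id=1") ||
    mysql_query(mysql, "UPDATE account SET balance=balance+100 WHERE id=2"))
{
  fprintf(stderr, "Transaction failed: Error: %s\n", mysql_error(mysql));
  mysql_rollback(mysql);   /* undo any partial changes */
}
else
  mysql_commit(mysql);     /* make both updates permanent */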
mysql_more_results()
my_bool mysql_more_results(MYSQL *mysql)
Returns true if more results exist from the currently executed query,
in which case the application must call mysql_next_result() to fetch the
results.
Available from MySQL 4.1.
TRUE if more results exist. FALSE if no more results exist.
None.
mysql_next_result()
int mysql_next_result(MYSQL *mysql)
If more query results exist, mysql_next_result() reads the
next query result and returns its status to the application.
Available from MySQL 4.1.
Zero if successful. Non-zero if an error occurred.
None.
From MySQL 4.1, you can also make use of prepared statements using
the statement handler MYSQL_STMT, which supports multiple query
execution with input and output binding.
Prepared execution is an efficient way to execute a statement more than once. The statement is first parsed (prepared). Then it is executed one or more times at a later time, using the statement handle returned by the prepare function.
Another advantage of prepared statements is that they use a binary protocol, which makes data transfer between client and server more efficient.
Prepared execution is faster than direct execution for statements executed more than once, primarily because the query is parsed only once; with direct execution, the query is parsed every time it is executed. Prepared execution can also reduce network traffic, because the execute call sends only the data for the parameters.
Prepared statements mainly use the following two structures, MYSQL_STMT and
MYSQL_BIND:
MYSQL_STMT
A statement handle, created by mysql_prepare().
One connection can have any number of statement handles; the limit depends only on
the available system resources.
MYSQL_BIND
This structure is used to bind parameter data (through mysql_bind_param()) for the
mysql_execute() call, as well as to bind row
buffers (through mysql_bind_result()) for fetching the result set data with
mysql_fetch().
The MYSQL_BIND structure contains the members listed here:
enum enum_field_types buffer_type [input]
The buffer_type value must be one of the following:
MYSQL_TYPE_TINY
MYSQL_TYPE_SHORT
MYSQL_TYPE_LONG
MYSQL_TYPE_LONGLONG
MYSQL_TYPE_FLOAT
MYSQL_TYPE_DOUBLE
MYSQL_TYPE_TIME
MYSQL_TYPE_DATE
MYSQL_TYPE_DATETIME
MYSQL_TYPE_TIMESTAMP
MYSQL_TYPE_STRING
MYSQL_TYPE_VAR_STRING
MYSQL_TYPE_TINY_BLOB
MYSQL_TYPE_MEDIUM_BLOB
MYSQL_TYPE_LONG_BLOB
MYSQL_TYPE_BLOB
void *buffer [input/output]
A pointer to the buffer that holds the parameter data (for input binding) or that receives the fetched column data (for output binding).
unsigned long buffer_length [input]
The length of *buffer in bytes. For character and binary C data,
buffer_length specifies the length of *buffer to be used
as parameter data when the structure is used with mysql_bind_param(),
or the maximum number of bytes to return when fetching results
with mysql_bind_result().
long *length [input/output]
When mysql_execute() is called, *length contains the length
of the parameter value stored in *buffer. This is ignored except for
character or binary C data.
If length is a null pointer, the protocol assumes that all
character and binary data are null-terminated.
When this structure is used in output binding, mysql_fetch()
returns the length of the fetched data in *length.
bool *is_null [input/output]
MYSQL_TIME
MYSQL_TIME structure contains the members listed here:
| Member | Type | Description |
| year | unsigned int | Year. |
| month | unsigned int | Month of the year. |
| day | unsigned int | Day of the month. |
| hour | unsigned int | Hour of the day (TIME). |
| minute | unsigned int | Minute of the hour. |
| second | unsigned int | Second of the minute. |
| neg | my_bool | A boolean flag to indicate if the time is negative. |
| second_part | unsigned long | Fraction part of the second (not yet used). |
The functions available for prepared statements are listed here and are described in greater detail in a later section. See section 8.1.7 C API Prepared Statement Function Descriptions.
| Function | Description |
| mysql_prepare() | Prepares an SQL string for execution. |
| mysql_param_count() | Returns the number of parameters in a prepared SQL statement. |
| mysql_prepare_result() | Returns prepared statement metadata in the form of a result set. |
| mysql_bind_param() | Binds a buffer to parameter markers in a prepared SQL statement. |
| mysql_execute() | Executes the prepared statement. |
| mysql_stmt_affected_rows() | Returns the number of rows changed/deleted/inserted by the last UPDATE, DELETE, or INSERT query. |
| mysql_bind_result() | Binds application data buffers to columns in the result set. |
| mysql_stmt_store_result() | Retrieves the complete result set to the client. |
| mysql_stmt_data_seek() | Seeks to an arbitrary row in a statement result set. |
| mysql_stmt_row_seek() | Seeks to a row in a statement result set, using a value returned from mysql_stmt_row_tell(). |
| mysql_stmt_row_tell() | Returns the statement row cursor position. |
| mysql_stmt_num_rows() | Returns the total number of rows in the statement's buffered result set. |
| mysql_stmt_sqlstate() | Returns the SQLSTATE error code for the last statement error. |
| mysql_fetch() | Fetches the next row of data from the result set and returns data for all bound columns. |
| mysql_stmt_close() | Frees memory used by the prepared statement. |
| mysql_stmt_errno() | Returns the error number for the last statement execution. |
| mysql_stmt_error() | Returns the error message for the last statement execution. |
| mysql_send_long_data() | Sends long data in chunks to the server. |
Call mysql_prepare() to prepare and initialise the statement
handle, then call mysql_bind_param() to supply the parameter
data, and then call mysql_execute() to execute the query. You can
repeat mysql_execute() after changing the parameter values in the
respective buffers supplied through mysql_bind_param().
If the query is a SELECT statement, or any other statement that
produces a result set, mysql_prepare() will also return the result
set metadata in the form of a MYSQL_RES result set
through mysql_prepare_result().
You can supply the result buffers using mysql_bind_result(), so
that mysql_fetch() automatically returns data to those
buffers. This is row-by-row fetching.
You can also send text or binary data in chunks to the server using
mysql_send_long_data(), by specifying the option is_long_data=1
or length=MYSQL_LONG_DATA or -2 in the MYSQL_BIND structure supplied
with mysql_bind_param().
Once the statement execution is over, the statement must be freed using
mysql_stmt_close() so that all resources allocated for
the statement handle are released.
To prepare and execute a statement, the application follows the steps just described.
If an error occurs at any step, you can get the statement error code and message using
mysql_stmt_errno() and mysql_stmt_error(), respectively.
You need to use the following functions when you want to prepare and execute the queries.
mysql_prepare()
MYSQL_STMT * mysql_prepare(MYSQL *mysql, const char *query, unsigned
long length)
Prepares the SQL query pointed to by the null-terminated string query. The query must consist of a single SQL statement. You should not add a terminating semicolon (`;') or \g to the statement.
The application can include one or more parameter markers in the SQL
statement. To include a parameter marker, the application embeds a
question mark (?) into the SQL string at the appropriate
position.
The markers are legal only in certain places in SQL statements. For example, they are not allowed in the select list (the list of columns to be returned by a SELECT statement), nor are they allowed as both operands of a binary operator such as the equal sign (=), because it would be impossible to determine the parameter type. In general, parameters are legal only in Data Manipulation Language (DML) statements, and not in Data Definition Language (DDL) statements.
The parameter markers are then bound to application variables using
mysql_bind_param().
MYSQL_STMT if the prepare was successful. NULL if an error
occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
If the prepare is not successful (that is, mysql_prepare() returns a
NULL statement), the error message can be obtained by calling mysql_error().
For the usage of mysql_prepare() refer to the Example from
section 8.1.7.21 mysql_execute().
mysql_param_count()
unsigned int mysql_param_count(MYSQL_STMT *stmt)
Returns the number of parameter markers present in the prepared query.
An unsigned integer representing the number of parameters in a statement.
None
For the usage of mysql_param_count() refer to the Example from
section 8.1.7.21 mysql_execute().
mysql_prepare_result()
MYSQL_RES *mysql_prepare_result(MYSQL_STMT *stmt)
If the statement passed to mysql_prepare() produces a result set,
mysql_prepare_result() returns the result set metadata in the form of a
MYSQL_RES structure. This structure can be used to process
meta information such as the total number of fields and individual field
information, and it can be passed as an argument to any of the field-based
API functions.
A MYSQL_RES result structure. NULL if no meta information exists for
the prepared query.
CR_OUT_OF_MEMORY
CR_UNKNOWN_ERROR
For the usage of mysql_prepare_result() refer to the Example from
section 8.1.7.56 mysql_fetch()
mysql_bind_param()
int mysql_bind_param(MYSQL_STMT *stmt, MYSQL_BIND *bind)
mysql_bind_param() is used to bind data for the parameter markers
in the SQL statement passed to mysql_prepare(). It uses the
MYSQL_BIND structure to supply the data.
The supported buffer types are the same as those listed for the buffer_type member of the MYSQL_BIND structure above.
Zero if the bind was successful. Non-zero if an error occurred.
CR_NO_PREPARE_STMT
CR_NO_PARAMETERS_EXISTS
CR_INVALID_BUFFER_USE
CR_UNSUPPORTED_PARAM_TYPE
CR_OUT_OF_MEMORY
CR_UNKNOWN_ERROR
For the usage of mysql_bind_param() refer to the Example from
section 8.1.7.21 mysql_execute().
mysql_execute()
int mysql_execute(MYSQL_STMT *stmt)
mysql_execute() executes the prepared query associated with the
statement handle. The parameter marker values will be sent to server
during this call, so that server replaces markers with this newly
supplied data.
If the statement is an UPDATE, DELETE, or INSERT, the total number of
changed/deleted/inserted rows can be found by calling
mysql_stmt_affected_rows(). If the statement produces a result set, you
must call mysql_fetch() to fetch the data prior to making any
other calls that result in query processing. For more information on
how to fetch the statement binary data, refer to section 8.1.7.56 mysql_fetch().
8.1.7.23 Return Values
mysql_execute() returns the following values:
| Return Value | Description |
| 0 | Successful. |
| 1 | An error occurred. The error code and message can be obtained by calling mysql_stmt_errno() and mysql_stmt_error(). |
CR_NO_PREPARE_QUERY
CR_ALL_PARAMS_NOT_BOUND
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
The following example explains the usage of mysql_prepare(),
mysql_param_count(), mysql_bind_param(), mysql_execute(),
and mysql_stmt_affected_rows().
MYSQL_BIND bind[3];
MYSQL_STMT *stmt;
ulonglong affected_rows;
long length;
unsigned int param_count;
int int_data;
short small_data;
char str_data[50], query[255];
my_bool is_null;
/* Set autocommit mode to true */
mysql_autocommit(mysql, 1);
if (mysql_query(mysql,"DROP TABLE IF EXISTS test_table"))
{
fprintf(stderr, "\n drop table failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
if (mysql_query(mysql,"CREATE TABLE test_table(col1 INT, col2 varchar(50), \
col3 smallint,\
col4 timestamp(14))"))
{
fprintf(stderr, "\n create table failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
/* Prepare a insert query with 3 parameters */
strmov(query, "INSERT INTO test_table(col1,col2,col3) values(?,?,?)");
if(!(stmt = mysql_prepare(mysql, query, strlen(query))))
{
fprintf(stderr, "\n prepare, insert failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
fprintf(stdout, "\n prepare, insert successful");
/* Get the parameter count from the statement */
param_count= mysql_param_count(stmt);
fprintf(stdout, "\n total parameters in insert: %d", param_count);
if (param_count != 3) /* validate parameter count */
{
fprintf(stderr, "\n invalid parameter count returned by MySQL");
exit(0);
}
/* Bind the data for the parameters */
/* INTEGER PART */
bind[0].buffer_type= MYSQL_TYPE_LONG;
bind[0].buffer= (char *)&int_data;
bind[0].is_null= 0;
bind[0].length= 0;
/* STRING PART */
bind[1].buffer_type= MYSQL_TYPE_VAR_STRING;
bind[1].buffer= (char *)str_data;
bind[1].buffer_length= sizeof(str_data);
bind[1].is_null= 0;
bind[1].length= 0;
/* SMALLINT PART */
bind[2].buffer_type= MYSQL_TYPE_SHORT;
bind[2].buffer= (char *)&small_data;
bind[2].is_null= &is_null;
bind[2].length= 0;
is_null= 0;
/* Bind the buffers */
if (mysql_bind_param(stmt, bind))
{
fprintf(stderr, "\n param bind failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Specify the data */
int_data= 10; /* integer */
strcpy(str_data,"MySQL"); /* string */
/* INSERT SMALLINT data as NULL */
is_null= 1;
/* Execute the insert statement - 1*/
if (mysql_execute(stmt))
{
fprintf(stderr, "\n execute 1 failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
fprintf(stderr, "\n send a bug report to bugs@lists.mysql.com, by asking why this is not working ?");
exit(0);
}
/* Get the total rows affected */
affected_rows= mysql_stmt_affected_rows(stmt);
fprintf(stdout, "\n total affected rows: %lld", affected_rows);
if (affected_rows != 1) /* validate affected rows */
{
fprintf(stderr, "\n invalid affected rows by MySQL");
exit(0);
}
/* Re-execute the insert, by changing the values */
int_data= 1000;
strcpy(str_data,"The most popular open source database");
small_data= 1000; /* smallint */
is_null= 0; /* reset NULL */
/* Execute the insert statement - 2*/
if (mysql_execute(stmt))
{
fprintf(stderr, "\n execute 2 failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Get the total rows affected */
affected_rows= mysql_stmt_affected_rows(stmt);
fprintf(stdout, "\n total affected rows: %lld", affected_rows);
if (affected_rows != 1) /* validate affected rows */
{
fprintf(stderr, "\n invalid affected rows by MySQL");
exit(0);
}
/* Close the statement */
if (mysql_stmt_close(stmt))
{
fprintf(stderr, "\n failed while closing the statement");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* DROP THE TABLE */
if (mysql_query(mysql,"DROP TABLE test_table"))
{
fprintf(stderr, "\n drop table failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
fprintf(stdout, "Success, MySQL prepared statements are working!!!");
mysql_stmt_affected_rows()
ulonglong mysql_stmt_affected_rows(MYSQL_STMT *stmt)
Returns the total number of rows changed by the last executed statement. May be called immediately after mysql_execute() for UPDATE, DELETE, or INSERT statements. For SELECT statements, mysql_stmt_affected_rows() works like mysql_num_rows().
An integer greater than zero indicates the number of rows affected or retrieved. Zero indicates that no records were updated for an UPDATE statement, that no rows matched the WHERE clause in the query, or that no query has yet been executed. -1 indicates that the query returned an error or that, for a SELECT query, mysql_stmt_affected_rows() was called prior to calling mysql_fetch().
None.
For the usage of mysql_stmt_affected_rows() refer to the Example
from section 8.1.7.21 mysql_execute().
mysql_bind_result()
my_bool mysql_bind_result(MYSQL_STMT *stmt, MYSQL_BIND *bind)
mysql_bind_result() is used to associate, or bind, columns in the
resultset to data buffers and length buffers. When mysql_fetch() is
called to fetch data, the MySQL client protocol returns the data for the
bound columns in the specified buffers.
Note that all columns must be bound prior to calling mysql_fetch()
when fetching data into buffers; otherwise mysql_fetch() simply ignores
the data fetch. Also, the buffers should be large enough to hold the
data, as the protocol doesn't return the data in chunks.
A column can be bound or rebound at any time, even after data has been
fetched from the result set. The new binding takes effect the next time
mysql_fetch() is called. For example, suppose an application binds
the columns in a result set and calls mysql_fetch(). The client
protocol returns data in the bound buffers. Now suppose the application
binds the columns to a different set of buffers. The protocol does
not place the data for the just-fetched row in the newly bound
buffers; instead, it does so when the next mysql_fetch() is called.
To bind a column, an application calls mysql_bind_result() and
passes the type, address, and the address of the length buffer.
The supported buffer types are the same as those listed for the buffer_type member of the MYSQL_BIND structure above.
Zero if the bind was successful. Non-zero if an error occurred.
CR_NO_PREPARE_STMT
CR_UNSUPPORTED_PARAM_TYPE
CR_OUT_OF_MEMORY
CR_UNKNOWN_ERROR
For the usage of mysql_bind_result() refer to the Example from
section 8.1.7.56 mysql_fetch()
mysql_stmt_store_result()
int mysql_stmt_store_result(MYSQL_STMT *stmt)
You must call mysql_stmt_store_result() for every query that
successfully retrieves
data (SELECT, SHOW, DESCRIBE, EXPLAIN), but only
if you want to buffer the complete result set on the client, so that
subsequent mysql_fetch() calls return buffered data.
You don't have to call mysql_stmt_store_result() for other
queries, but it will not do any harm or cause any notable performance loss
if you call it in all cases. You can detect whether the query had a result
set by checking whether mysql_prepare_result() returns 0. For more information refer
to section 8.1.7.11 mysql_prepare_result().
Zero if the results were buffered successfully. Non-zero if an error occurred.
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
mysql_stmt_data_seek()
void mysql_stmt_data_seek(MYSQL_STMT *stmt, my_ulonglong offset)
Seeks to an arbitrary row in a statement result set. This requires that the
statement result set structure contains the entire result of the last
executed query, so mysql_stmt_data_seek() may be used in
conjunction only with mysql_stmt_store_result().
The offset should be a value in the range from 0 to
mysql_stmt_num_rows(stmt)-1.
None.
None.
mysql_stmt_row_seek()
MYSQL_ROW_OFFSET mysql_stmt_row_seek(MYSQL_STMT *stmt, MYSQL_ROW_OFFSET offset)
Sets the row cursor to an arbitrary row in a statement result set. This requires
that the result set structure contains the entire result of the query, so
mysql_stmt_row_seek() may be used in conjunction only with
mysql_stmt_store_result().
The offset should be a value returned from a call to mysql_stmt_row_tell()
or to mysql_stmt_row_seek(). This value is not simply a row number; if you
want to seek to a row within a result set using a row number, use
mysql_stmt_data_seek() instead.
The previous value of the row cursor. This value may be passed to a
subsequent call to mysql_stmt_row_seek().
None.
mysql_stmt_row_tell()
MYSQL_ROW_OFFSET mysql_stmt_row_tell(MYSQL_STMT *stmt)
Returns the current position of the row cursor for the last
mysql_fetch(). This value can be used as an argument to
mysql_stmt_row_seek().
You should use mysql_stmt_row_tell() only after mysql_stmt_store_result().
The current offset of the row cursor.
None.
mysql_stmt_num_rows()
my_ulonglong mysql_stmt_num_rows(MYSQL_STMT *stmt)
Returns the number of rows in the result set.
The use of mysql_stmt_num_rows() depends on whether you used
mysql_stmt_store_result() to buffer the entire result set in the
statement handle or not.
If you use mysql_stmt_store_result(), mysql_stmt_num_rows() may be
called immediately.
The number of rows in the result set.
None.
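A minimal sketch of buffering and navigating a statement result set, assuming stmt is a MYSQL_STMT that has already been prepared, bound with mysql_bind_result(), and executed:
my_ulonglong rows;
MYSQL_ROW_OFFSET saved;
if (mysql_stmt_store_result(stmt))
{
  fprintf(stderr, "stmt_store_result failed: %s\n", mysql_stmt_error(stmt));
}
else
{
  rows= mysql_stmt_num_rows(stmt);   /* rows now buffered on the client */
  fprintf(stdout, "rows in result: %llu\n", (unsigned long long) rows);
  saved= mysql_stmt_row_tell(stmt);  /* remember the cursor position */
  mysql_stmt_data_seek(stmt, 0);     /* jump to the first row by number */
  /* ... call mysql_fetch(stmt) here to read rows ... */
  mysql_stmt_row_seek(stmt, saved);  /* return to the remembered position */
}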
mysql_fetch()
int mysql_fetch(MYSQL_STMT *stmt)
mysql_fetch() returns the next row in the result set. It can
be called only while a result set exists, that is, after a call to
mysql_execute() that creates a result set, or after
mysql_stmt_store_result(), which is called after
mysql_execute() to buffer the entire result set.
If row buffers are bound using mysql_bind_result(), it returns
the data in those buffers for all the columns in the current row,
and the lengths are returned to the length pointers.
Note that all columns must be bound by the application.
If a fetched value is NULL, the is_null value from
MYSQL_BIND contains TRUE (1); otherwise, the data and its length are
returned to the *buffer and *length variables based on the
buffer type specified by the application. All numeric, float, and double
types have a fixed length (in bytes) as listed below:
| Type | Length |
| MYSQL_TYPE_TINY | 1 |
| MYSQL_TYPE_SHORT | 2 |
| MYSQL_TYPE_LONG | 4 |
| MYSQL_TYPE_FLOAT | 4 |
| MYSQL_TYPE_LONGLONG | 8 |
| MYSQL_TYPE_DOUBLE | 8 |
| MYSQL_TYPE_TIME | sizeof(MYSQL_TIME) |
| MYSQL_TYPE_DATE | sizeof(MYSQL_TIME) |
| MYSQL_TYPE_DATETIME | sizeof(MYSQL_TIME) |
| MYSQL_TYPE_TIMESTAMP | sizeof(MYSQL_TIME) |
| MYSQL_TYPE_STRING | data length |
| MYSQL_TYPE_VAR_STRING | data_length |
| MYSQL_TYPE_BLOB | data_length |
| MYSQL_TYPE_TINY_BLOB | data_length |
| MYSQL_TYPE_MEDIUM_BLOB | data_length |
| MYSQL_TYPE_LONG_BLOB | data_length |
where data_length is the actual length of the data.
| Return Value | Description |
| 0 | Successful, the data has been fetched to the application data buffers. |
| 1 | An error occurred. The error code and message can be obtained by calling mysql_stmt_errno() and mysql_stmt_error(). |
| 100 (MYSQL_NO_DATA) | No more rows/data exist. |
CR_COMMANDS_OUT_OF_SYNC
CR_OUT_OF_MEMORY
CR_SERVER_GONE_ERROR
CR_SERVER_LOST
CR_UNKNOWN_ERROR
CR_UNSUPPORTED_PARAM_TYPE
mysql_bind_result().
The following example explains the usage of mysql_prepare_result(),
mysql_bind_result(), and mysql_fetch():
MYSQL_STMT *stmt;
MYSQL_BIND bind[2];
MYSQL_RES *result;
const char *query;
int int_data;
long int_length, str_length;
char str_data[50];
my_bool is_null[2];
query= "SELECT col1, col2 FROM test_table WHERE col1= 10";
if (!(stmt= mysql_prepare(mysql, query, strlen(query))))
{
fprintf(stderr, "\n prepare failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
/* Get the fields meta information */
if (!(result= mysql_prepare_result(stmt)))
{
fprintf(stderr, "\n prepare_result failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
fprintf(stdout, "Total fields: %ld", mysql_num_fields(result));
if (mysql_num_fields(result) != 2)
{
fprintf(stderr, "\n prepare returned invalid field count");
exit(0);
}
/* Execute the SELECT query */
if (mysql_execute(stmt))
{
fprintf(stderr, "\n execute failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Bind the result data buffers */
bind[0].buffer_type= MYSQL_TYPE_LONG;
bind[0].buffer= (char *)&int_data;
bind[0].is_null= &is_null[0];
bind[0].length= &int_length;
bind[1].buffer_type= MYSQL_TYPE_VAR_STRING;
bind[1].buffer= (void *)str_data;
bind[1].buffer_length= 20;
bind[1].is_null= &is_null[1];
bind[1].length= &str_length;
if (mysql_bind_result(stmt, bind))
{
fprintf(stderr, "\n bind_result failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Now fetch data to buffers */
if (mysql_fetch(stmt))
{
fprintf(stderr, "\n fetch failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
if (is_null[0])
fprintf(stdout, "\n Col1 data is NULL");
else
fprintf(stdout, "\n Col1: %d, length: %ld", int_data, int_length);
if (is_null[1])
fprintf(stdout, "\n Col2 data is NULL");
else
fprintf(stdout, "\n Col2: %s, length: %ld", str_data, str_length);
/* call mysql_fetch again */
if (mysql_fetch(stmt) != MYSQL_NO_DATA)
{
fprintf(stderr, "\n fetch returned more than one row");
exit(0);
}
/* Free the prepare result meta information */
mysql_free_result(result);
/* Free the statement handle */
if (mysql_stmt_close(stmt))
{
fprintf(stderr, "\n failed to free the statement handle");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
mysql_send_long_data()
int mysql_send_long_data(MYSQL_STMT *stmt, unsigned int
parameter_number, const char *data, ulong length)
Allows an application to send data in pieces or chunks to the server. This function can be used to send character or binary data values in parts to a column (which must be a TEXT or BLOB column).
data is a pointer to the buffer containing the actual data for
the parameter represented by parameter_number. length
indicates the amount of data to be sent, in bytes.
Zero if the data is sent successfully to server. Non-zero if an error occured.
CR_INVALID_PARAMETER_NO
CR_COMMANDS_OUT_OF_SYNC
CR_SERVER_GONE_ERROR
CR_OUT_OF_MEMORY
CR_UNKNOWN_ERROR
The following example explains how to send data in chunks to a TEXT column:
MYSQL_STMT *stmt;
MYSQL_BIND bind[1];
long length;
const char *query;
query= "INSERT INTO test_long_data(text_column) VALUES(?)";
if (!(stmt= mysql_prepare(mysql, query, strlen(query))))
{
fprintf(stderr, "\n prepare failed");
fprintf(stderr, "\n %s", mysql_error(mysql));
exit(0);
}
memset(bind, 0, sizeof(bind));
bind[0].buffer_type= MYSQL_TYPE_STRING;
bind[0].length= &length;
bind[0].is_null= 0;
/* Bind the buffers */
if (mysql_bind_param(stmt, bind))
{
fprintf(stderr, "\n param bind failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Supply data in chunks to server */
if (mysql_send_long_data(stmt,1,"MySQL",5))
{
fprintf(stderr, "\n send_long_data failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Supply the next piece of data */
if (mysql_send_long_data(stmt,1," - The most popular open source database",40))
{
fprintf(stderr, "\n send_long_data failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
/* Now, execute the query */
if (mysql_execute(stmt))
{
fprintf(stderr, "\n mysql_execute failed");
fprintf(stderr, "\n %s", mysql_stmt_error(stmt));
exit(0);
}
This inserts the data "MySQL - The most popular open source database"
into the column text_column.
mysql_stmt_close()
my_bool mysql_stmt_close(MYSQL_STMT *)
Closes the prepared statement. mysql_stmt_close() also
deallocates the statement handle pointed to by stmt.
If the current statement has pending or unread results, this call cancels them so that the next query can be executed.
Zero if the statement was freed successfully. Non-zero if an error occurred.
CR_SERVER_GONE_ERROR
CR_UNKNOWN_ERROR
For the usage of mysql_stmt_close() refer to the Example from
section 8.1.7.21 mysql_execute().
mysql_stmt_errno()
unsigned int mysql_stmt_errno(MYSQL_STMT *stmt)
For the statement specified by stmt, mysql_stmt_errno()
returns the error code for the most recently invoked statement API
function that can succeed or fail. A return value of zero means that no
error occurred. Client error message numbers are listed in the MySQL
errmsg.h header file. Server error message numbers are listed in
mysqld_error.h. In the MySQL source distribution you can find a complete
list of error messages and error numbers in the file Docs/mysqld_error.txt.
An error code value. Zero if no error occurred.
None
mysql_stmt_error()
char *mysql_stmt_error(MYSQL_STMT *stmt)
For the statement specified by stmt, mysql_stmt_error()
returns the error message for the most recently invoked statement API
function that can succeed or fail. An empty string ("") is returned if no error
occurred. This means the following two tests are equivalent:
if (mysql_stmt_errno(stmt))
{
// an error occurred
}
if (mysql_stmt_error(stmt)[0] != '\0')
{
// an error occurred
}
The language of the client error messages may be changed by recompiling the MySQL client library. Currently you can choose error messages in several different languages.
A character string that describes the error. An empty string if no error occurred.
None
mysql_stmt_sqlstate()
const char *mysql_stmt_sqlstate(MYSQL_STMT *stmt)
Works like the corresponding mysql_sqlstate function for prepared
statements. See section 8.1.3.205 mysql_sqlstate().
Returns the SQLSTATE error code for the last error for the prepared statement.
This function was added to MySQL 4.1.1.
See section 8.1.3.51 mysql_errno().
See section 8.1.3.55 mysql_error().
See section 8.1.3.205 mysql_sqlstate().
From version 4.1, MySQL supports the execution of multiple queries in a single
command. To use this, you must set the client flag
CLIENT_MULTI_QUERIES when opening the connection.
By default, mysql_query() or mysql_real_query() returns
only the status of the first query; the status of the subsequent queries can
be processed using mysql_more_results() and
mysql_next_result().
/* Connect to server with option CLIENT_MULTI_QUERIES */
mysql_real_connect(..., CLIENT_MULTI_QUERIES);
/* Now execute multiple queries */
mysql_query(mysql,"DROP TABLE IF EXISTS test_table;\
CREATE TABLE test_table(id INT);\
INSERT INTO test_table VALUES(10);\
UPDATE test_table SET id=20 WHERE id=10;\
SELECT * FROM test_table;\
DROP TABLE test_table";
while (mysql_more_results(mysql))
{
/* Process all results */
mysql_next_result(mysql);
...
printf("total affected rows: %lld", mysql_affected_rows(mysql));
...
if ((result= mysql_store_result(mysql)))
{
/* Returned a result set, process it */
}
}
Using the new binary protocol available in MySQL 4.1 and above, you can send and
receive DATE, TIME, and TIMESTAMP data using the MYSQL_TIME
structure, whose members are described earlier in this section.
In order to send the data, you must use prepared statements through
mysql_prepare() and mysql_execute(), and you must bind the
parameter using the type MYSQL_TYPE_DATE for a date
value, MYSQL_TYPE_TIME for a time value, and MYSQL_TYPE_DATETIME or
MYSQL_TYPE_TIMESTAMP for datetime/timestamp values, using
mysql_bind_param() when sending and mysql_bind_result()
when receiving the data.
Here is a simple example that inserts DATE, TIME, and TIMESTAMP data.
MYSQL_TIME ts;
MYSQL_BIND bind[3];
MYSQL_STMT *stmt;
char query[255];
strmov(query, "INSERT INTO test_table(date_field, time_field, \
timestamp_field) VALUES(?,?,?)");
stmt= mysql_prepare(mysql, query, strlen(query));
/* setup input buffers for all 3 parameters */
bind[0].buffer_type= MYSQL_TYPE_DATE;
bind[0].buffer= (char *)&ts;
bind[0].is_null= 0;
bind[0].length= 0;
..
bind[1]= bind[2]= bind[0];
..
mysql_bind_param(stmt, bind);
/* supply the data to be sent in the ts structure */
ts.year= 2002;
ts.month= 02;
ts.day= 03;
ts.hour= 10;
ts.minute= 45;
ts.second= 20;
mysql_execute(stmt);
..
You need to use the following functions when you want to create a threaded client. See section 8.1.14 How to Make a Threaded Client.
my_init()
void my_init(void)
This function needs to be called once in the program before calling any
MySQL function. This initialises some global variables that MySQL
needs. If you are using a thread-safe client library, this will also
call mysql_thread_init() for this thread.
This is automatically called by mysql_init(),
mysql_server_init() and mysql_connect().
None.
mysql_thread_init()
my_bool mysql_thread_init(void)
This function needs to be called for each created thread to initialise thread-specific variables.
This is automatically called by my_init() and mysql_connect().
None.
mysql_thread_end()
void mysql_thread_end(void)
This function needs to be called before calling pthread_exit() to
free memory allocated by mysql_thread_init().
Note that this function is not invoked automatically by the client library. It must be called explicitly to avoid a memory leak.
None.
mysql_thread_safe()
unsigned int mysql_thread_safe(void)
This function indicates whether the client is compiled as thread-safe.
1 if the client is thread-safe, 0 otherwise.
You must use the following functions if you want to allow your application to be linked against the embedded MySQL server library. See section 8.1.15 libmysqld, the Embedded MySQL Server Library.
If the program is linked with -lmysqlclient instead of
-lmysqld, these functions do nothing. This makes it
possible to choose between using the embedded MySQL server and
a stand-alone server without modifying any code.
mysql_server_init()
int mysql_server_init(int argc, char **argv, char **groups)
This function must be called once in the program using the
embedded server before calling any other MySQL function. It starts up
the server and initialises any subsystems (mysys, InnoDB, etc.)
that the server uses. If this function is not called, the program will
crash. If you are using the DBUG package that comes with MySQL, you
should call this after you have called MY_INIT().
The argc and argv arguments are analogous to the arguments
to main(). The first element of argv is ignored (it
typically contains the program name). For convenience, argc may
be 0 (zero) if there are no command-line arguments for the
server. mysql_server_init() makes a copy of the arguments so
it's safe to destroy argv or groups after the call.
The NULL-terminated list of strings in groups
selects which groups in the option files will be active.
See section 4.1.2 `my.cnf' Option Files. For convenience, groups may be
NULL, in which case the [server] and [embedded] groups
will be active.
#include <mysql.h>
#include <stdlib.h>
static char *server_args[] = {
"this_program", /* this string is not used */
"--datadir=.",
"--key_buffer_size=32M"
};
static char *server_groups[] = {
"embedded",
"server",
"this_program_SERVER",
(char *)NULL
};
int main(void) {
mysql_server_init(sizeof(server_args) / sizeof(char *),
server_args, server_groups);
/* Use any MySQL API functions here */
mysql_server_end();
return EXIT_SUCCESS;
}
0 if okay, 1 if an error occurred.
mysql_server_end()
void mysql_server_end(void)
This function must be called once in the program after all other MySQL functions. It shuts down the embedded server.
None.
Why Is It that After mysql_query() Returns Success, mysql_store_result() Sometimes Returns NULL?
It is possible for mysql_store_result() to return NULL
following a successful call to mysql_query(). When this happens, it
means one of the following conditions occurred:
There was a malloc() failure (for example, if the result set was too
large).
The query was a statement that does not return data (for example, an INSERT,
UPDATE, or DELETE).
You can always check whether the statement should have produced a
non-empty result by calling mysql_field_count(). If
mysql_field_count() returns zero, the result is empty and the last
query was a statement that does not return values (for example, an
INSERT or a DELETE). If mysql_field_count() returns a
non-zero value, the statement should have produced a non-empty result.
See the description of the mysql_field_count() function for an
example.
You can test for an error by calling mysql_error() or
mysql_errno().
In addition to the result set returned by a query, you can also get the following information:
mysql_affected_rows() returns the number of rows affected by the last
query when doing an INSERT, UPDATE, or DELETE. An
exception is that if DELETE is used without a WHERE clause, the
table is re-created empty, which is much faster! In this case,
mysql_affected_rows() returns zero for the number of records
affected.
mysql_num_rows() returns the number of rows in a result set. With
mysql_store_result(), mysql_num_rows() may be called as soon as
mysql_store_result() returns. With mysql_use_result(),
mysql_num_rows() may be called only after you have fetched all the
rows with mysql_fetch_row().
mysql_insert_id() returns the ID generated by the last
query that inserted a row into a table with an AUTO_INCREMENT index.
See section 8.1.3.130 mysql_insert_id().
Some queries (LOAD DATA INFILE ..., INSERT INTO
... SELECT ..., UPDATE) return additional information. The result is
returned by mysql_info(). See the description for mysql_info()
for the format of the string that it returns. mysql_info() returns a
NULL pointer if there is no additional information.
If you insert a record in a table containing a column that has the
AUTO_INCREMENT attribute, you can get the most recently generated
ID by calling the mysql_insert_id() function.
You can also retrieve the ID by using the LAST_INSERT_ID() function in
a query string that you pass to mysql_query().
You can check if an AUTO_INCREMENT index is used by executing
the following code. This also checks if the query was an INSERT with
an AUTO_INCREMENT index:
if (mysql_error(&mysql)[0] == 0 &&
mysql_num_fields(result) == 0 &&
mysql_insert_id(&mysql) != 0)
{
used_id = mysql_insert_id(&mysql);
}
The most recently generated ID is maintained in the server on a
per-connection basis. It will not be changed by another client. It will not
even be changed if you update another AUTO_INCREMENT column with a
non-magic value (that is, a value that is not NULL and not 0).
If you want to use the ID that was generated for one table and insert it into a second table, you can use SQL statements like this:
INSERT INTO foo (auto,text)
VALUES(NULL,'text'); # generate ID by inserting NULL
INSERT INTO foo2 (id,text)
VALUES(LAST_INSERT_ID(),'text'); # use ID in second table
When linking with the C API, the following errors may occur on some systems:
gcc -g -o client test.o -L/usr/local/lib/mysql -lmysqlclient -lsocket -lnsl
Undefined                       first referenced
  symbol                            in file
floor                               /usr/local/lib/mysql/libmysqlclient.a(password.o)
ld: fatal: Symbol referencing errors. No output written to client
If this happens on your system, you must include the math library by
adding -lm to the end of the compile/link line.
If you compile MySQL clients that you've written yourself or that
you obtain from a third-party, they must be linked using the
-lmysqlclient -lz option on the link command. You may also need to
specify a -L option to tell the linker where to find the library. For
example, if the library is installed in `/usr/local/mysql/lib', use
-L/usr/local/mysql/lib -lmysqlclient -lz on the link command.
For clients that use MySQL header files, you may need to specify a
-I option when you compile them (for example,
-I/usr/local/mysql/include), so the compiler can find the header
files.
To make the above simpler on Unix, we have provided the
mysql_config script for you. See section 4.8.9 mysql_config, Get compile options for compiling clients.
You can use it to compile a MySQL client as follows:
CFG=/usr/local/mysql/bin/mysql_config sh -c "gcc -o progname `$CFG --cflags` progname.c `$CFG --libs`"
The sh -c is needed to get the shell not to treat the output from
mysql_config as one word.
The client library is almost thread-safe. The biggest problem is
that the subroutines in `net.c' that read from sockets are not
interrupt safe. This was done with the thought that you might want to
have your own alarm that can break a long read to a server. If you
install interrupt handlers for the SIGPIPE interrupt,
the socket handling should be thread-safe.
In the older binaries we distribute on our web site (http://www.mysql.com/), the client libraries are not normally compiled with the thread-safe option (the Windows binaries are by default compiled to be thread-safe). Newer binary distributions should have both a normal and a thread-safe client library.
To get a threaded client where you can interrupt the client from other
threads and set timeouts when talking with the MySQL server, you should
use the -lmysys, -lmystrings, and -ldbug libraries and
the net_serv.o code that the server uses.
If you don't need interrupts or timeouts, you can just compile a
thread-safe client library (mysqlclient_r) and use this. See section 8.1 MySQL C API. In this case you don't have to worry about the
net_serv.o object file or the other MySQL libraries.
When using a threaded client and you want to use timeouts and
interrupts, you can make great use of the routines in the
`thr_alarm.c' file. If you are using routines from the
mysys library, the only thing you must remember is to call
my_init() first! See section 8.1.10 C API Threaded Function Descriptions.
All functions except mysql_real_connect() are by default
thread-safe. The following notes describe how to compile a thread-safe
client library and use it in a thread-safe manner. (The notes below for
mysql_real_connect() actually apply to mysql_connect() as
well, but because mysql_connect() is deprecated, you should be
using mysql_real_connect() anyway.)
To make mysql_real_connect() thread-safe, you must recompile the
client library with this command:
shell> ./configure --enable-thread-safe-client
This will create a thread-safe client library libmysqlclient_r.
(Assuming your OS has a thread-safe gethostbyname_r() function.)
This library is thread-safe per connection. You can let two threads
share the same connection with the following caveats:
You must ensure that between a mysql_query() and mysql_store_result() no other thread is using
the same connection.
Many threads can access different result sets that are retrieved with
mysql_store_result().
If you use mysql_use_result(), you have to ensure that no other thread
is using the same connection until the result set is closed.
However, it really is best for threaded clients that share the same
connection to use mysql_store_result().
If multiple threads share the same connection, you must have a mutex lock
around your mysql_query() and
mysql_store_result() call combination. Once
mysql_store_result() is ready, the lock can be released and other
threads may query the same connection.
If you use POSIX threads, you can use pthread_mutex_lock() and pthread_mutex_unlock() to
establish and release a mutex lock.
You need to know the following if you have a thread that is calling MySQL functions which did not create the connection to the MySQL database:
When you call mysql_init() or mysql_connect(), MySQL will
create a thread-specific variable for the thread that is used by the
debug library (among other things).
If you call a MySQL function, before the thread has
called mysql_init() or mysql_connect(), the thread will
not have the necessary thread-specific variables in place and you are
likely to end up with a core dump sooner or later.
To get things to work smoothly you have to do the following (a minimal sketch follows this list):
Call my_init() at the start of your program if it calls
any other MySQL function before calling mysql_real_connect().
Call mysql_thread_init() in the thread handler before calling
any MySQL function.
Call mysql_thread_end() before calling
pthread_exit(). This will free the memory used by MySQL
thread-specific variables.
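A minimal sketch of such a thread function, assuming the program is linked against the thread-safe client library and has already called my_init(); the connection parameters are only placeholders:
#include <pthread.h>
#include <mysql.h>

void *worker_thread(void *arg)
{
  MYSQL mysql;
  (void)arg;
  mysql_thread_init();             /* per-thread initialisation */
  mysql_init(&mysql);
  if (mysql_real_connect(&mysql,"host","user","passwd","database",0,NULL,0))
  {
    /* ... issue queries on this thread's own connection ... */
    mysql_close(&mysql);
  }
  mysql_thread_end();              /* free thread-specific variables */
  pthread_exit(NULL);
}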
You may get some errors because of undefined symbols when linking your
client with libmysqlclient_r. In most cases this is because you haven't
included the thread libraries on the link/compile line.
The embedded MySQL server library makes it possible to run a full-featured MySQL server inside the client application. The main benefits are increased speed and simpler management for embedded applications.
The API is identical for the embedded MySQL version and the client/server version. To change an old threaded application to use the embedded library, you normally only have to add calls to the following functions:
| Function | When to call |
| mysql_server_init() | Should be called before any other MySQL function is called, preferably early in the main() function. |
| mysql_server_end() | Should be called before your program exits. |
| mysql_thread_init() | Should be called in each thread you create that will access MySQL. |
| mysql_thread_end() | Should be called before calling pthread_exit(). |
Then you must link your code with `libmysqld.a' instead of `libmysqlclient.a'.
The above mysql_server_xxx functions are also included in
`libmysqlclient.a' to allow you to change between the embedded and the
client/server version by just linking your application with the right
library. See section 8.1.11.1 mysql_server_init().
libmysqld
To get a libmysqld library you should configure MySQL with the
--with-embedded-server option.
When you link your program with libmysqld, you must also include
the system-specific pthread libraries and some libraries that
the MySQL server uses. You can get the full list of libraries by executing
mysql_config --libmysqld-libs.
The correct flags for compiling and linking a threaded program must be used, even if you do not directly call any thread functions in your code.
The embedded server has the following limitations:
Some of these limitations can be changed by editing the `mysql_embed.h' include file and recompiling MySQL.
The following is the recommended way to use option files to make it easy to switch between a client/server application and one where MySQL is embedded (a sketch of such an option file follows this list). See section 4.1.2 `my.cnf' Option Files.
Put common options in the [server] section. These will be read by
both MySQL versions.
Put options specific to the stand-alone server in the [mysqld] section.
Put options specific to the embedded server in the [embedded] section.
Put options specific to your application in the [ApplicationName_SERVER]
section.
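A sketch of such an option file, with placeholder values rather than recommendations:
[server]
# options read by both the stand-alone and the embedded server
key_buffer_size=32M

[mysqld]
# options read only by the stand-alone mysqld server
port=3306

[embedded]
# options read only when the server is embedded in an application

[ApplicationName_SERVER]
# options for the embedded server in one specific application
datadir=/path/to/application/data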
This example program and makefile should work without any changes on a Linux or FreeBSD system. For other operating systems, minor changes will be needed. This example is designed to give enough details to understand the problem, without the clutter that is a necessary part of a real application.
To try out the example, create a `test_libmysqld' directory at the same level as the mysql-4.0 source directory. Save the `test_libmysqld.c' source and the `GNUmakefile' in the directory, and run GNU `make' from inside the `test_libmysqld' directory.
`test_libmysqld.c'
/*
* A simple example client, using the embedded MySQL server library
*/
#include <mysql.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
MYSQL *db_connect(const char *dbname);
void db_disconnect(MYSQL *db);
void db_do_query(MYSQL *db, const char *query);
const char *server_groups[] = {
"test_libmysqld_SERVER", "embedded", "server", NULL
};
int
main(int argc, char **argv)
{
MYSQL *one, *two;
/* mysql_server_init() must be called before any other mysql
* functions.
*
* You can use mysql_server_init(0, NULL, NULL), and it will
* initialise the server using groups = {
* "server", "embedded", NULL
* }.
*
* In your $HOME/.my.cnf file, you probably want to put:
[test_libmysqld_SERVER]
language = /path/to/source/of/mysql/sql/share/english
* You could, of course, modify argc and argv before passing
* them to this function. Or you could create new ones in any
* way you like. But all of the arguments in argv (except for
* argv[0], which is the program name) should be valid options
* for the MySQL server.
*
* If you link this client against the normal mysqlclient
* library, this function is just a stub that does nothing.
*/
mysql_server_init(argc, argv, (char **)server_groups);
one = db_connect("test");
two = db_connect(NULL);
db_do_query(one, "SHOW TABLE STATUS");
db_do_query(two, "SHOW DATABASES");
mysql_close(two);
mysql_close(one);
/* This must be called after all other mysql functions */
mysql_server_end();
exit(EXIT_SUCCESS);
}
static void
die(MYSQL *db, char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
vfprintf(stderr, fmt, ap);
va_end(ap);
(void)putc('\n', stderr);
if (db)
db_disconnect(db);
exit(EXIT_FAILURE);
}
MYSQL *
db_connect(const char *dbname)
{
MYSQL *db = mysql_init(NULL);
if (!db)
die(db, "mysql_init failed: no memory");
/*
* Notice that the client and server use separate group names.
* This is critical, because the server will not accept the
* client's options, and vice versa.
*/
mysql_options(db, MYSQL_READ_DEFAULT_GROUP, "test_libmysqld_CLIENT");
if (!mysql_real_connect(db, NULL, NULL, NULL, dbname, 0, NULL, 0))
die(db, "mysql_real_connect failed: %s", mysql_error(db));
return db;
}
void
db_disconnect(MYSQL *db)
{
mysql_close(db);
}
void
db_do_query(MYSQL *db, const char *query)
{
if (mysql_query(db, query) != 0)
goto err;
if (mysql_field_count(db) > 0)
{
MYSQL_RES *res;
MYSQL_ROW row, end_row;
int num_fields;
if (!(res = mysql_store_result(db)))
goto err;
num_fields = mysql_num_fields(res);
while ((row = mysql_fetch_row(res)))
{
(void)fputs(">> ", stdout);
for (end_row = row + num_fields; row < end_row; ++row)
(void)printf("%s\t", row ? (char*)*row : "NULL");
(void)fputc('\n', stdout);
}
(void)fputc('\n', stdout);
mysql_free_result(res);
}
else
(void)printf("Affected rows: %lld\n", mysql_affected_rows(db));
return;
err:
die(db, "db_do_query failed: %s [%s]", mysql_error(db), query);
}
`GNUmakefile'
# This assumes the MySQL software is installed in /usr/local/mysql
inc      := /usr/local/mysql/include/mysql
lib      := /usr/local/mysql/lib

# If you have not installed the MySQL software yet, try this instead
#inc      := $(HOME)/mysql-4.0/include
#lib      := $(HOME)/mysql-4.0/libmysqld

CC       := gcc
CPPFLAGS := -I$(inc) -D_THREAD_SAFE -D_REENTRANT
CFLAGS   := -g -W -Wall
LDFLAGS  := -static

# You can change -lmysqld to -lmysqlclient to use the
# client/server library
LDLIBS    = -L$(lib) -lmysqld -lz -lm -lcrypt

ifneq (,$(shell grep FreeBSD /COPYRIGHT 2>/dev/null))
# FreeBSD
LDFLAGS += -pthread
else
# Assume Linux
LDLIBS += -lpthread
endif

# This works for simple one-file test programs
sources := $(wildcard *.c)
objects := $(patsubst %c,%o,$(sources))
targets := $(basename $(sources))

all: $(targets)

clean:
	rm -f $(targets) $(objects) *.core
The MySQL source code is covered by the GNU GPL license
(see section H GNU General Public License). One result of this is that any program
which includes, by linking with libmysqld, the MySQL
source code must be released as free software (under a license
compatible with the GPL).
We encourage everyone to promote free software by releasing
code under the GPL or a compatible license. For those who
are not able to do this, another option is to purchase a
commercial licence for the MySQL code from MySQL AB.
For details, please see section 1.4.3 MySQL Licenses.
MySQL provides support for ODBC by means of the MyODBC
program. This chapter will teach you how to install MyODBC,
and how to use it. Here, you will also find a list of common programs that
are known to work with MyODBC.
MyODBC 2.50 is a 32-bit ODBC 2.50 specification level 0 (with
level 1 and level 2 features) driver for connecting an ODBC-aware
application to MySQL. MyODBC works on Windows 9x/Me/NT/2000/XP
and most Unix platforms.
MyODBC 3.51 is an enhanced version with ODBC 3.5x specification
level 1 (complete core API + level 2 features).
MyODBC is Open Source, and you can find the newest
version at http://www.mysql.com/downloads/api-myodbc.html.
Please note that the 2.50.x versions are LGPL licensed,
whereas the 3.51.x versions are GPL licensed.
If you have problems with MyODBC and your program also works
with OLEDB, you should try the OLEDB driver.
Normally you only need to install MyODBC on Windows machines.
You only need MyODBC for Unix if you have a program like
ColdFusion that is running on the Unix machine and uses ODBC to connect
to the databases.
If you want to install MyODBC on a Unix box, you will also need
an ODBC manager. MyODBC is known to work with
most of the Unix ODBC managers.
To install MyODBC on Windows, you should download the
appropriate MyODBC `.zip' file,
unpack it with WinZIP or some similar program,
and execute the `SETUP.EXE' file.
On Windows NT/XP you may get the following error when trying to install
MyODBC:
An error occurred while copying C:\WINDOWS\SYSTEM\MFC30.DLL. Restart Windows and try installing again (before running any applications which use ODBC)
The problem in this case is that some other program is using ODBC, and because of how Windows is designed, you may not be able to install new ODBC drivers with Microsoft's ODBC setup program. In most cases you can continue by just pressing Ignore to copy the rest of the MyODBC files, and the final installation should still work. If this doesn't work, the solution is to reboot your computer in ``safe mode'' (choose this by pressing F8 just before your machine starts Windows during rebooting), install MyODBC, and reboot to normal mode.
To connect to a MySQL server from an ODBC application, you must first install MyODBC on the Windows machine, and the MySQL server must have a user account that is allowed to connect from that machine; such accounts are set up with the GRANT command. See section 4.3.1 GRANT and REVOKE Syntax.
Notice that there are other configuration options on the screen of MySQL (trace, don't prompt on connect, etc) that you can try if you run into problems.
There are three possibilities for specifying the server name on Windows95: use the IP address of the server, configure the PC to use DNS, or add an entry to the `\windows\lmhosts' file in the format ip hostname. For example:
194.216.84.21 my_hostname
Example of how to fill in the ODBC setup:
Windows DSN name: test
Description:      This is my test database
MySql Database:   test
Server:           194.216.84.21
User:             monty
Password:         my_password
Port:
The value for the Windows DSN name field is any name that is unique
in your Windows ODBC setup.
You don't have to specify values for the Server, User,
Password, or Port fields in the ODBC setup screen.
However, if you do, the values will be used as the defaults later when
you attempt to make a connection. You have the option of changing the
values at that time.
If the port number is not given, the default port (3306) is used.
If you specify the option Read options from C:\my.cnf, the groups
client and odbc will be read from the `C:\my.cnf' file.
You can use all options that are usable by mysql_options().
See section 8.1.3.163 mysql_options().
One can specify the following parameters for MyODBC in the [Servername] section of an `ODBC.INI' file or through the InConnectionString argument in the SQLDriverConnect() call.
| Parameter | Default value | Comment |
| user | ODBC (on Windows) | The username used to connect to MySQL. |
| server | localhost | The hostname of the MySQL server. |
| database | | The default database. |
| option | 0 | An integer specifying how MyODBC should work. See below. |
| port | 3306 | The TCP/IP port to use if server is not localhost. |
| stmt | | A statement that will be executed when connecting to MySQL. |
| password | | The password for the server user combination. |
| socket | | The socket or Windows pipe to connect to. |
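For example, an `ODBC.INI' data source entry using some of these parameters might look as follows (the data source name, driver path, and account details are only illustrative):

[sample-MySQL-DSN]
Driver   = /usr/local/lib/libmyodbc.so
server   = localhost
database = test
user     = monty
password = my_password
socket   = /tmp/mysql.sock
option   = 3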
The option argument is used to tell MyODBC that the client isn't 100%
ODBC compliant. On Windows, one normally sets the option flag by
toggling the different options on the connection screen but one can also
set this in the option argument. The following options are listed in the
same order as they appear in the MyODBC connect screen:
| Bit | Description |
| 1 | The client can't handle that MyODBC returns the real width of a column. |
| 2 | The client can't handle that MySQL returns the true value of affected rows. If this flag is set, MySQL returns 'found rows' instead. One must have MySQL 3.21.14 or newer to get this to work. |
| 4 | Make a debug log in c:\myodbc.log. This is the same as putting MYSQL_DEBUG=d:t:O,c::\myodbc.log in `AUTOEXEC.BAT'. |
| 8 | Don't set any packet limit for results and parameters. |
| 16 | Don't prompt for questions even if the driver would like to prompt. |
| 32 | Simulate an ODBC 1.0 driver in some contexts. |
| 64 | Ignore use of database name in 'database.table.column'. |
| 128 | Force use of ODBC manager cursors (experimental). |
| 256 | Disable the use of extended fetch (experimental). |
| 512 | Pad CHAR fields to full column length. |
| 1024 | SQLDescribeCol() will return fully qualified column names. |
| 2048 | Use the compressed server/client protocol. |
| 4096 | Tell the server to ignore space after function name and before '(' (needed by PowerBuilder). This will make all function names keywords! |
| 8192 | Connect with named pipes to a mysqld server running on NT. |
| 16384 | Change LONGLONG columns to INT columns (some applications can't handle LONGLONG). |
| 32768 | Return 'user' as Table_qualifier and Table_owner from SQLTables (experimental). |
| 65536 | Read parameters from the client and odbc groups from `my.cnf'. |
| 131072 | Add some extra safety checks (should not be needed but...). |
To select multiple options, add the corresponding flag values. For example, setting option to 12 (4+8) gives you debugging without packet limits!
The default `MYODBC.DLL' is compiled for optimal performance. If
you want to debug MyODBC (for example to enable tracing),
you should instead use `MYODBCD.DLL'. To install this file, copy
`MYODBCD.DLL' over the installed `MYODBC.DLL' file.
MyODBC has been tested with Access, Admndemo.exe, C++-Builder,
Borland Builder 4, Centura Team Developer (formerly Gupta SQL/Windows),
ColdFusion (on Solaris and NT with svc pack 5), Crystal Reports,
DataJunction, Delphi, ERwin, Excel, iHTML, FileMaker Pro, FoxPro, Notes
4.5/4.6, SBSS, Perl DBD-ODBC, Paradox, Powerbuilder, Powerdesigner 32
bit, VC++, and Visual Basic.
If you know of any other applications that work with MyODBC, please
send mail to myodbc@lists.mysql.com about this!
With some programs you may get an error like: Another user has modified the record that you have modified. In most cases this can be solved by doing one of the following things:
If the above doesn't help, you should do a MyODBC trace file and
try to figure out why things go wrong.
Most programs should work with MyODBC, but for each of those
listed here, we have tested it ourselves or received confirmation from
some user that it works:
Install the latest version of MDAC (Microsoft Data Access Components) from http://www.microsoft.com/data/. This will fix the following bug in Access: when you export data to MySQL, the table and column names aren't specified. Another way around this bug is to upgrade to MyODBC Version 2.50.33 and MySQL Version 3.23.x, which together provide a workaround for this bug!
You should also get and apply the Microsoft Jet 4.0 Service Pack 5 (SP5), which can be found at http://support.microsoft.com/support/kb/articles/Q239/1/14.ASP.
This will fix some cases where columns are marked as #deleted#
in Access.
Note that if you are using MySQL Version 3.22, you must apply the MDAC patch and use MyODBC 2.50.32 or 2.50.34 and above to work around this problem.
Enable the Return matching rows option. For Access 2.0, you should additionally enable the Simulate ODBC 1.0 option.
You should have a TIMESTAMP column in all tables that you want to be able to update; TIMESTAMP(14) or simple TIMESTAMP is recommended instead of other TIMESTAMP(X) variations.
#DELETED#.
Use DOUBLE float fields. Access fails when comparing with single-precision floats. The symptom usually is that new or updated rows may show up as #DELETED# or that you can't find or update rows.
If a table in Access has a BIGINT as one of its columns, the results will be displayed as #DELETED#. The workaround solution is:
Add a column with TIMESTAMP as the data type, preferably TIMESTAMP(14).
Check 'Change BIGINT columns to INT' in the connection options dialog in the ODBC DSN Administrator.
The results may still be displayed as #DELETED#, but newly added/updated records will be displayed properly.
If you get the error Another user has changed your data after adding a TIMESTAMP column, the following trick may help you: don't use the table data sheet view. Instead, create a form with the fields you want, and use that form's data sheet view. You should set the DefaultValue property for the TIMESTAMP column to NOW(). It may be a good idea to hide the TIMESTAMP column from view so your users are not confused.
"Query|SQLSpecific|Pass-Through" from the Access menu.
Access reports BLOB columns as OLE OBJECTS. If you want to have MEMO columns instead, you should change the column to TEXT with ALTER TABLE.
Access can't always handle DATE columns properly. If you have a problem with these, change the columns to DATETIME.
If you have a column defined as BYTE in Access, Access will try to export it as TINYINT instead of TINYINT UNSIGNED. This will give you problems if you have values > 127 in the column!
When using ADO with MyODBC, you need to pay attention to some default properties that aren't supported by the MySQL server. For example, using the CursorLocation property as adUseServer will return a result of -1 for the RecordCount property. To get the right value, you need to set this property to adUseClient, as shown in the VB code below:
Dim myconn As New ADODB.Connection
Dim myrs As New Recordset
Dim mySQL As String
Dim myrows As Long

myconn.Open "DSN=MyODBCsample"
mySQL = "SELECT * from user"
myrs.Source = mySQL
Set myrs.ActiveConnection = myconn
myrs.CursorLocation = adUseClient
myrs.Open
myrows = myrs.RecordCount

myrs.Close
myconn.Close

Another workaround is to use a SELECT COUNT(*) statement for a similar query to get the correct row count.
Return matching rows.
Don't optimize column widths and Return matching rows.
When you start a query, you can use either the property Active or the method Open. Note that Active will start by automatically issuing a SELECT * FROM ... query, which may not be a good thing if your tables are big!
ColdFusion on Unix uses MyODBC for MySQL data sources. Allaire has verified that MyODBC Version 2.50.26 works with MySQL Version 3.22.27 and ColdFusion for Linux. (Any newer version should also work.) You can download MyODBC at http://www.mysql.com/downloads/api-myodbc.html.
ColdFusion Version 4.5.1 allows you to use the ColdFusion Administrator
to add the MySQL data source. However, the driver is not
included with ColdFusion Version 4.5.1. Before the MySQL driver
will appear in the ODBC datasources drop-down list, you must build and
copy the MyODBC driver to
`/opt/coldfusion/lib/libmyodbc.so'.
The Contrib directory contains the program `mydsn-xxx.zip', which allows you to build and remove the DSN registry file for the MyODBC driver for ColdFusion applications.
VARCHAR rather than ENUM, as
it exports the latter in a manner that causes MySQL grief.
If you have problems getting Excel97 to recognise TIME columns, retrieve them using the CONCAT() function. For example:
select CONCAT(rise_time), CONCAT(set_time)
from sunrise_sunset;
Values retrieved as strings this way should be correctly recognised
as time values by Excel97.
The purpose of CONCAT() in this example is to fool ODBC into thinking
the column is of ``string type''. Without the CONCAT(), ODBC knows the
column is of time type, and Excel does not understand that.
Note that this is a bug in Excel, because it automatically converts a
string to a time. This would be great if the source was a text file, but
is plain stupid when the source is an ODBC connection that reports
exact types for each column.
MyODBC driver and the Add-in Microsoft Query help.
For example, create a db with a table containing 2 columns of text:
mysql client command-line tool.
Make sure that you check the Don't optimize column width option field when connecting to MySQL.
Also, here is some potentially useful Delphi code that sets up both an ODBC entry and a BDE entry for MyODBC (the BDE entry requires a BDE Alias Editor, which is available for free at a Delphi Super Page near you). Thanks to Bryan Brunton bryan@flesherfab.com for this:
fReg:= TRegistry.Create;
fReg.OpenKey('\Software\ODBC\ODBC.INI\DocumentsFab', True);
fReg.WriteString('Database', 'Documents');
fReg.WriteString('Description', ' ');
fReg.WriteString('Driver', 'C:\WINNT\System32\myodbc.dll');
fReg.WriteString('Flag', '1');
fReg.WriteString('Password', '');
fReg.WriteString('Port', ' ');
fReg.WriteString('Server', 'xmark');
fReg.WriteString('User', 'winuser');
fReg.OpenKey('\Software\ODBC\ODBC.INI\ODBC Data Sources', True);
fReg.WriteString('DocumentsFab', 'MySQL');
fReg.CloseKey;
fReg.Free;
Memo1.Lines.Add('DATABASE NAME=');
Memo1.Lines.Add('USER NAME=');
Memo1.Lines.Add('ODBC DSN=DocumentsFab');
Memo1.Lines.Add('OPEN MODE=READ/WRITE');
Memo1.Lines.Add('BATCH COUNT=200');
Memo1.Lines.Add('LANGDRIVER=');
Memo1.Lines.Add('MAX ROWS=-1');
Memo1.Lines.Add('SCHEMA CACHE DIR=');
Memo1.Lines.Add('SCHEMA CACHE SIZE=8');
Memo1.Lines.Add('SCHEMA CACHE TIME=-1');
Memo1.Lines.Add('SQLPASSTHRU MODE=SHARED AUTOCOMMIT');
Memo1.Lines.Add('SQLQRYMODE=');
Memo1.Lines.Add('ENABLE SCHEMA CACHE=FALSE');
Memo1.Lines.Add('ENABLE BCD=FALSE');
Memo1.Lines.Add('ROWSET SIZE=20');
Memo1.Lines.Add('BLOBS TO CACHE=64');
Memo1.Lines.Add('BLOB SIZE=32');
AliasEditor.Add('DocumentsFab','MySQL',Memo1.Lines);
Return matching rows.
SHOW PROCESSLIST will not work properly. The fix is to set
the option OPTION=16384 in the ODBC connect string or to set
the Change BIGINT columns to INT option in the MyODBC connect screen.
You may also want to set the Return matching rows option.
If you get the error [Microsoft][ODBC Driver Manager] Driver does not support this parameter, the reason may be that you have a BIGINT in your result. Try setting the Change BIGINT columns to INT option in the MyODBC connect screen.
Don't optimize column widths.
AUTO_INCREMENT Column in ODBC
A common problem is how to get the value of an automatically generated ID
from an INSERT. With ODBC, you can do something like this (assuming
that auto is an AUTO_INCREMENT field):
INSERT INTO foo (auto,text) VALUES(NULL,'text');
SELECT LAST_INSERT_ID();
Or, if you are just going to insert the ID into another table, you can do this:
INSERT INTO foo (auto,text) VALUES(NULL,'text');
INSERT INTO foo2 (id,text) VALUES(LAST_INSERT_ID(),'text');
See section 8.1.12.3 How Can I Get the Unique ID for the Last Inserted Row?.
For the benefit of some ODBC applications (at least Delphi and Access), the following query can be used to find a newly inserted row:
SELECT * FROM tbl_name WHERE auto IS NULL;
If you encounter difficulties with MyODBC, you should start by
making a log file from the ODBC manager (the log you get when requesting
logs from ODBCADMIN) and a MyODBC log.
To get a MyODBC log, you need to do the following:
Enable the trace option flag in the MyODBC connect/configure screen. The log will be written to the file `C:\myodbc.log'.
If the trace option is not remembered when you are going back to the
above screen, it means that you are not using the myodbcd.dll
driver (see the item above).
Check the MyODBC trace file to find out what could be wrong. You should be able to find the issued queries by searching for the string >mysql_real_query in the `myodbc.log' file.
You should also try duplicating the queries in the mysql monitor
or admndemo to find out if the error is MyODBC or MySQL.
If you find out something is wrong, please only send the relevant rows (max 40 rows) to myodbc@lists.mysql.com. Please never send the whole MyODBC or ODBC log file!
If you are unable to find out what's wrong, the last option is to make an archive (tar or zip) that contains a MyODBC trace file, the ODBC log file, and a README file that explains the problem. You can send this to ftp://support.mysql.com/pub/mysql/secret/. Only we at MySQL AB will have access to the files you upload, and we will be very discreet with the data!
If you can create a program that also shows this problem, please upload this too!
If the program works with some other SQL server, you should make an ODBC log file where you do exactly the same thing in the other SQL server.
Remember that the more information you can supply to us, the more likely it is that we can fix the problem!
There are 2 supported JDBC drivers for MySQL:
MySQL Connector/J from MySQL AB, implemented in 100% native Java.
This product was formerly known as the mm.mysql driver.
You can download MySQL Connector/J from
http://www.mysql.com/products/connector-j/.
For documentation, consult any JDBC documentation, plus each driver's own documentation for MySQL-specific features.
PHP is a server-side, HTML-embedded scripting language that may be used to create dynamic web pages. It contains support for accessing several databases, including MySQL. PHP may be run as a separate program or compiled as a module for use with the Apache web server.
The distribution and documentation are available at the PHP web site (http://www.php.net/).
Note that when building PHP with MySQL support, you may need to specify -lz last when linking with -lmysqlclient.
This section documents the Perl DBI interface. The former interface
was called mysqlperl. DBI/DBD now is the
recommended Perl interface, so mysqlperl is obsolete and is not
documented here.
DBI with DBD::mysql
DBI is a generic interface for many databases. That means that
you can write a script that works with many different database engines
without change. You need a DataBase Driver (DBD) defined for each
database type. For MySQL, this driver is called
DBD::mysql.
For more information on the Perl5 DBI, please visit the DBI web
page and read the documentation:
http://dbi.perl.org/
For more information on Object Oriented Programming (OOP) as defined in Perl5, see the Perl OOP page:
http://language.perl.com/info/documentation.html
Note that if you want to use transactions with Perl, you need to have
DBD-mysql version 1.2216 or newer. Version 2.1022 or newer
is recommended.
Installation instructions for MySQL Perl support are given in section 2.7 Perl Installation Comments.
If you have the MySQL module installed, you can find information about specific MySQL functionality with one of the following commands:
shell> perldoc DBD/mysql
shell> perldoc mysql
DBI Interface
Portable DBI Methods
| Method | Description |
| connect | Establishes a connection to a database server. |
| disconnect | Disconnects from the database server. |
| prepare | Prepares a SQL statement for execution. |
| execute | Executes prepared statements. |
| do | Prepares and executes a SQL statement. |
| quote | Quotes string or BLOB values to be inserted. |
| fetchrow_array | Fetches the next row as an array of fields. |
| fetchrow_arrayref | Fetches the next row as a reference to an array of fields. |
| fetchrow_hashref | Fetches the next row as a reference to a hashtable. |
| fetchall_arrayref | Fetches all data as an array of arrays. |
| finish | Finishes a statement and lets the system free resources. |
| rows | Returns the number of rows affected. |
| data_sources | Returns an array of databases available on localhost. |
| ChopBlanks | Controls whether fetchrow_* methods trim spaces. |
| NUM_OF_PARAMS | The number of placeholders in the prepared statement. |
| NULLABLE | Which columns can be NULL. |
| trace | Performs tracing for debugging. |
MySQL-specific Methods
| Method | Description |
| insertid | The latest AUTO_INCREMENT value. |
| is_blob | Which columns are BLOB values. |
| is_key | Which columns are keys. |
| is_num | Which columns are numeric. |
| is_pri_key | Which columns are primary keys. |
| is_not_null | Which columns CANNOT be NULL. See NULLABLE. |
| length | Maximum possible column sizes. |
| max_length | Maximum column sizes actually present in result. |
| NAME | Column names. |
| NUM_OF_FIELDS | Number of fields returned. |
| table | Table names in returned set. |
| type | All column types. |
The Perl methods are described in more detail in the following sections. Variables used for method return values have these meanings:
$dbh Database handle
$sth Statement handle
$rc Return code (often a status)
$rv Return value (often a row count)
Portable DBI Methods
connect($data_source, $username, $password)
Use the connect method to make a database connection to the data source. The $data_source value should begin with DBI:driver_name:.
Example uses of connect with the DBD::mysql driver:
$dbh = DBI->connect("DBI:mysql:$database", $user, $password);
$dbh = DBI->connect("DBI:mysql:$database:$hostname",
$user, $password);
$dbh = DBI->connect("DBI:mysql:$database:$hostname:$port",
$user, $password);
If the user name and/or password are undefined, DBI uses the
values of the DBI_USER and DBI_PASS environment variables,
respectively. If you don't specify a hostname, it defaults to
'localhost'. If you don't specify a port number, it defaults to the
default MySQL port (3306).
As of Msql-Mysql-modules Version 1.2009,
the $data_source value allows certain modifiers:
mysql_read_default_file=file_name
mysql_read_default_group=group_name
[client] group. By specifying the mysql_read_default_group
option, the default group becomes the [group_name] group.
mysql_compression=1
mysql_socket=/path/to/socket
DBI script, you can take them from the user's `~/.my.cnf'
option file instead by writing your connect call like this:
$dbh = DBI->connect("DBI:mysql:$database"
. ";mysql_read_default_file=$ENV{HOME}/.my.cnf",
$user, $password);
This call will read options defined for the [client] group in the
option file. If you wanted to do the same thing but use options specified
for the [perl] group as well, you could use this:
$dbh = DBI->connect("DBI:mysql:$database"
. ";mysql_read_default_file=$ENV{HOME}/.my.cnf"
. ";mysql_read_default_group=perl",
$user, $password);
disconnect
The disconnect method disconnects the database handle from the database. This is typically called right before you exit from the program.
Example:
$rc = $dbh->disconnect;
prepare($statement)
Prepares a SQL statement for execution by the database engine and returns a statement handle ($sth), which you can use to invoke the execute method.
Typically you handle SELECT statements (and SELECT-like
statements such as SHOW, DESCRIBE, and EXPLAIN) by
means of prepare and execute. Example:
$sth = $dbh->prepare($statement)
or die "Can't prepare $statement: $dbh->errstr\n";
If you want to read big results to your client you can tell Perl to use
mysql_use_result() with:
my $sth = $dbh->prepare($statement { "mysql_use_result" => 1});
execute
The execute method executes a prepared statement. For non-SELECT statements, execute returns the number of rows affected. If no rows are affected, execute returns "0E0", which Perl treats as zero but regards as true. If an error occurs, execute returns undef. For SELECT statements, execute only starts the SQL query in the database; you need to use one of the fetch_* methods described below to retrieve the data.
Example:
$rv = $sth->execute
or die "can't execute the query: $sth->errstr;
do($statement)
The do method prepares and executes a SQL statement and returns the number of rows affected. If no rows are affected, do returns "0E0", which Perl treats as zero but regards as true. This method is generally used for non-SELECT statements that cannot be prepared in advance (due to driver limitations) or that do not need to be executed more than once (inserts, deletes, etc.). Example:
$rv = $dbh->do($statement)
or die "Can't execute $statement: $dbh- >errstr\n";
Generally the 'do' statement is much faster (and is preferable)
than prepare/execute for statements that don't contain parameters.
quote($string)
The quote method is used to "escape" any special characters contained in the string and to add the required outer quotation marks.
Example:
$sql = $dbh->quote($string)
fetchrow_array
This method fetches the next row of data and returns it as an array of field values. Example:
while(@row = $sth->fetchrow_array) {
print "$row[0]\t$row[1]\t$row[2]\n";
}
fetchrow_arrayref
This method fetches the next row of data and returns it as a reference to an array of field values. Example:
while($row_ref = $sth->fetchrow_arrayref) {
print "$row_ref->[0]\t$row_ref->[1]\t$row_ref->[2]\n";
}
fetchrow_hashref
This method fetches the next row of data and returns it as a reference to a hashtable of field name/value pairs. Example:
while($hash_ref = $sth->fetchrow_hashref) {
print "$hash_ref->{firstname}\t$hash_ref->{lastname}\t$hash_ref->{title}\n";
}
fetchall_arrayref
This method fetches all the result data as a reference to an array of references to arrays (one per row). Example:
my $table = $sth->fetchall_arrayref
or die "$sth->errstr\n";
my($i, $j);
for $i ( 0 .. $#{$table} ) {
for $j ( 0 .. $#{$table->[$i]} ) {
print "$table->[$i][$j]\t";
}
print "\n";
}
finish
Indicates that no more data will be fetched from this statement handle and lets the system free the associated resources. Example:
$rc = $sth->finish;
rows
Returns the number of rows changed (updated, deleted, etc.) by the last command. This is usually used after a non-SELECT execute statement. Example:
$rv = $sth->rows;
NULLABLE
Returns a reference to an array of values indicating whether each column can contain NULL values. The possible values for each array element are 0 or the empty string if the column cannot be NULL, 1 if it can, and 2 if the column's NULL status is unknown.
Example:
$null_possible = $sth->{NULLABLE};
NUM_OF_FIELDS
This attribute indicates the number of fields returned by a SELECT or SHOW FIELDS statement. You may use this for checking whether a statement returned a result: a zero value indicates a non-SELECT statement like INSERT, DELETE, or UPDATE.
Example:
$nr_of_fields = $sth->{NUM_OF_FIELDS};
data_sources($driver_name)
This method returns an array containing the names of databases available to the MySQL server on the host 'localhost'.
Example:
@dbs = DBI->data_sources("mysql");
ChopBlanks
This attribute determines whether the fetchrow_* methods will chop leading and trailing blanks from the returned values.
Example:
$sth->{'ChopBlanks'} =1;
trace($trace_level)
trace($trace_level, $trace_filename)
trace method enables or disables tracing. When invoked as a
DBI class method, it affects tracing for all handles. When invoked as
a database or statement handle method, it affects tracing for the given
handle (and any future children of the handle). Setting $trace_level
to 2 provides detailed trace information. Setting $trace_level to 0
disables tracing. Trace output goes to the standard error output by
default. If $trace_filename is specified, the file is opened in
append mode and output for all traced handles is written to that
file. Example:
DBI->trace(2); # trace everything
DBI->trace(2,"/tmp/dbi.out"); # trace everything to
# /tmp/dbi.out
$dth->trace(2); # trace this database handle
$sth->trace(2); # trace this statement handle
You can also enable DBI tracing by setting the DBI_TRACE environment variable. Setting it to a numeric value is equivalent to calling DBI->trace(value). Setting it to a pathname is equivalent to calling DBI->trace(2,value).
MySQL-specific Methods
The methods shown here are MySQL-specific and not part of the
DBI standard. Several of them are now deprecated:
is_blob, is_key, is_num, is_pri_key,
is_not_null, length, max_length, and table.
Where DBI-standard alternatives exist, they are noted here:
insertid
If you use the AUTO_INCREMENT feature of MySQL, the new auto-incremented value will be stored here.
Example:
$new_id = $sth->{insertid};
As an alternative, you can use $dbh->{'mysql_insertid'}.
is_blob
Returns a reference to an array of boolean values; for each element of the array, a true value indicates that the respective column is a BLOB.
Example:
$keys = $sth->{is_blob};
is_key
Returns a reference to an array of boolean values indicating which columns are keys. Example:
$keys = $sth->{is_key};
is_num
Returns a reference to an array of boolean values indicating which columns are numeric. Example:
$nums = $sth->{is_num};
is_pri_key
Returns a reference to an array of boolean values indicating which columns are primary keys. Example:
$pri_keys = $sth->{is_pri_key};
is_not_null
Returns a reference to an array of boolean values; for each element of the array, a false value indicates that the respective column may contain NULL values.
Example:
$not_nulls = $sth->{is_not_null};
is_not_null is deprecated; it is preferable to use the
NULLABLE attribute (described above), because that is a DBI standard.
length
max_length
The length array indicates the maximum possible sizes that each column may be (as declared in the table description). The max_length array indicates the maximum sizes actually present in the result table. Example:
$lengths = $sth->{length};
$max_lengths = $sth->{max_length};
NAME
Column names. Example:
$names = $sth->{NAME};
table
Table names in the returned set. Example:
$tables = $sth->{table};
type
All column types. Example:
$types = $sth->{type};
DBI/DBD Information
You can use the perldoc command to get more information about
DBI.
perldoc DBI perldoc DBI::FAQ perldoc DBD::mysql
You can also use the pod2man, pod2html, etc., tools to
translate to other formats.
You can find the latest DBI information at
the DBI web page: http://dbi.perl.org/.
MySQL Connector/C++ (or MySQL++) is the official MySQL API for C++. More
information can be found at http://www.mysql.com/products/mysql++/.
You can compile the MySQL Windows source with Borland C++ 5.02. (The Windows source includes only project files for Microsoft VC++; for Borland C++ you have to create the project files yourself.)
One known problem with Borland C++ is that it uses a different structure alignment than VC++. This means that you will run into problems if you try to use the default libmysql.dll library (which was compiled with VC++) with Borland C++. You can do one of the following to avoid this problem:
Call mysql_init() with NULL as an argument, not a pre-allocated MYSQL struct.
MySQLdb provides MySQL support for Python, compliant with the
Python DB API version 2.0. It can be found at
http://sourceforge.net/projects/mysql-python/.
MySQLtcl is a simple API for accessing a MySQL database server from the Tcl programming language. It can be found at http://www.xdobry.de/mysqltcl/.
Eiffel MySQL is an interface to the MySQL database server using the Eiffel programming language, written by Michael Ravits. It can be found at http://efsa.sourceforge.net/archive/ravits/mysql.htm.
In release 4.1, MySQL introduces spatial extensions, which allow the generation, storage, and analysis of geographic features.
A geographic feature is anything in the world that has a location.
A feature can be:
You may also find documents that use the term geospatial feature to refer to geographic features.
Geometry is another word that denotes a geographic feature. Originally the word geometry denoted a branch of mathematics; another meaning comes from cartography, referring to the geometric features that cartographers use to map the world.
This documentation uses the terms geographic feature, geospatial feature, feature, and geometry interchangeably, with geometry being the most common. Let's define a geometry as a point or an aggregate of points representing anything in the world that has a location.
MySQL implements spatial extensions following the OpenGIS specifications. The OpenGIS Consortium (OGC) is an international consortium of more than 250 companies, agencies, and universities participating in the development of publicly available conceptual solutions that can be useful with all kinds of applications that manage spatial data. See http://www.opengis.org/.
In 1997, the OpenGIS Consortium published the
OpenGIS (r) Simple Features Specifications For SQL, which proposes
several conceptual ways for extending an SQL RDBMS to support spatial
data. MySQL implements a subset of the SQL with Geometry Types
environment proposed by OGC.
This term refers to an SQL environment that has been extended with a
set of geometry types. A geometry-valued SQL column is implemented as
a column of a geometry type. The specifications describe a set of SQL
geometry types, as well as functions on those types to create and
analyse geometry values.
The set of geometry types proposed by OGC's SQL with Geometry Types environment is based on the OpenGIS Geometry Model. In this model, each geometric object belongs to one of the following instantiable classes:
Point
LineString
Polygon
GeometryCollection
MultiPoint
MultiLineString
MultiPolygon
Geometry is the base class. It's an abstract (non-instantiable) class. The instantiable subclasses of Geometry are restricted to zero, one, and two-dimensional geometric objects that exist in two-dimensional coordinate space. All instantiable geometry classes are defined so that valid instances of a geometry class are topologically closed (i.e. all defined geometries include their boundary).
The base Geometry class has subclasses for Point,
Curve, Surface and GeometryCollection.
Curve stands for 1-dimensional objects, and has subclass
LineString, with sub-subclasses Line and LinearRing.
Surface is designed for two-dimensional objects and
has subclass Polygon.
GeometryCollection
has specialised 0, 1 and two-dimensional collection classes named
MultiPoint, MultiLineString and MultiPolygon
for modelling geometries corresponding to collections of
Points, LineStrings and Polygons respectively.
MultiCurve and MultiSurface are introduced as abstract superclasses
that generalise the collection interfaces to handle Curves and Surfaces.
Geometry, Curve, Surface, MultiCurve, and MultiSurface are defined as non-instantiable classes; it is not possible to create objects of these classes. They define a common set of methods for their subclasses and are included for extensibility.
Point, LineString, Polygon, GeometryCollection, MultiPoint, MultiLineString, MultiPolygon are instantiable classes (marked bold in the hierarchy tree).
Geometry is the root class of the hierarchy. Each geometry is described by a number of its properties. Particular subclasses of the root class Geometry have their own specific properties. Properties, which are common for all geometry subclasses, are described in the list below. Geometry is a non-instantiable class.
Its type. Each geometry belongs to one of the instantiable classes in the hierarchy.
Its SRID, the identifier of the geometry's associated Spatial Reference System, which describes the coordinate space in which the geometry object is defined.
Its coordinates in its Spatial Reference System, represented as double-precision (8-byte) numbers. All non-empty geometries include at least one pair of (X,Y) coordinates. Empty geometries contain no coordinates.
Its interior, boundary, and exterior. All geometries occupy some position in space. The exterior of a geometry is all space not occupied by the geometry. The interior is the space occupied by the geometry. The boundary is the interface between the geometry's interior and exterior.
Its MBR, or Envelope: the geometry's Minimum Bounding Rectangle, which is the bounding geometry formed by the minimum and maximum (X,Y) coordinates:
((MINX MINY, MAXX MINY, MAXX MAXY, MINX MAXY, MINX MINY))
Whether it is simple or non-simple. Geometry values of some types (LineString, MultiPoint, MultiLineString) are either simple or non-simple. Each type determines its own assertions for being simple or non-simple.
Whether it is closed or not closed. Geometry values of some types (LineString, MultiLineString) are either closed or not closed. Each type determines its own assertions for being closed or not closed.
Whether it is empty or not empty. A geometry is empty if it does not have any points. The exterior, interior, and boundary of an empty geometry are not defined, i.e., they are represented by a NULL value. An empty geometry is defined to be always simple and has an area of 0.
Its dimension. A geometry can have a dimension of -1, 0, 1, or 2.
Note that distances on a planar coordinate system and distances on the geocentric system (coordinates on the Earth's surface) are different things.
A Point is a geometry that represents a single
location in coordinate space.
A Curve is a one-dimensional geometry, usually represented by a sequence
of points. Particular subclasses of Curve specify the form of the interpolation
between points. Curve is a non-instantiable class.
A LineString is a Curve with linear interpolation between points.
A Surface is a two-dimensional geometric object.
The only instantiable subclass of Surface defined in the OpenGIS specification is Polygon.
A Polygon is a planar Surface representing a multisided geometry, defined by one exterior boundary and zero or more interior boundaries. Each interior boundary defines a hole in the Polygon.
The assertions for polygons (the rules that define valid polygons) are:
In the above assertions, polygons are simple geometries.
A GeometryCollection is a geometry that is a collection of one or more
geometries of any class.
All the elements in a GeometryCollection must be in the same Spatial Reference (i.e. in the same coordinate system). GeometryCollection places no other constraints on its elements.
Subclasses of GeometryCollection described below may restrict membership based on:
A MultiPoint is a collection whose elements are
restricted to Points. The points are not connected or ordered in any way.
A MultiCurve is a geometry collection whose elements are Curves. MultiCurve is a non-instantiable class.
A MultiLineString is a MultiCurve whose elements are LineStrings.
A MultiSurface is a geometry collection whose elements are Surfaces. MultiSurface is a non-instantiable class.
The only instantiable subclass of MultiSurface is MultiPolygon.
A MultiPolygon is a MultiSurface whose elements are Polygons.
This section describes the standard spatial data formats that are used to store geometry objects: the Well-Known Text (WKT) format and the Well-Known Binary (WKB) format.
The Well-Known Text (WKT) representation of Geometry is designed to exchange geometry data in ASCII form.
Examples of WKT representations of geometry objects are:
POINT(10 10)
LINESTRING(10 10, 20 20, 30 40)
POLYGON((10 10, 10 20, 20 20, 20 15, 10 10))
MULTIPOINT(10 10, 20 20)
MULTILINESTRING((10 10, 20 20), (15 15, 30 15))
MULTIPOLYGON(((10 10, 10 20, 20 20, 20 15, 10 10)), ((60 60, 70 7, 80 60, 60 60 )))
GEOMETRYCOLLECTION(POINT(10 10), POINT(30 30), LINESTRING(15 15, 20 20))
The text representation of the implemented instantiable geometric types conforms to this grammar:
<Geometry Tagged Text> :=
<Point Tagged Text>
| <LineString Tagged Text>
| <Polygon Tagged Text>
| <MultiPoint Tagged Text>
| <MultiLineString Tagged Text>
| <MultiPolygon Tagged Text>
| <GeometryCollection Tagged Text>
<Point Tagged Text> := POINT <Point Text>
<LineString Tagged Text> := LINESTRING <LineString Text>
<Polygon Tagged Text> := POLYGON <Polygon Text>
<MultiPoint Tagged Text> := MULTIPOINT <Multipoint Text>
<MultiLineString Tagged Text> := MULTILINESTRING <MultiLineString Text>
<MultiPolygon Tagged Text> := MULTIPOLYGON <MultiPolygon Text>
<GeometryCollection Tagged Text> := GEOMETRYCOLLECTION <GeometryCollection Text>
<Point Text> := EMPTY | ( <Point> )
<Point> := <x> <y>
<x> := double precision literal
<y> := double precision literal
<LineString Text> := EMPTY | ( <Point > {, <Point > }* )
<Polygon Text> := EMPTY | ( <LineString Text > {, < LineString Text > }*)
<Multipoint Text> := EMPTY | ( <Point Text > {, <Point Text > }* )
<MultiLineString Text> := EMPTY | ( <LineString Text > {, < LineString Text > }* )
<MultiPolygon Text> := EMPTY | ( < Polygon Text > {, < Polygon Text > }* )
<GeometryCollection Text> := EMPTY | ( <Geometry Tagged Text> {, <Geometry Tagged Text> }* )
Well-Known Binary (WKB) representation is defined by the OpenGIS specifications. It's also defined in the ISO "SQL/MM Part 3: Spatial" standard.
WKB is used to exchange geometry data as binary streams represented by BLOB values containing geometric information, according to the structures described below.
WKB uses the following basic type definitions:
// byte : 8-bit unsigned integer (1 byte)
// uint32 : 32-bit unsigned integer (4 bytes)
// double : double precision number (8 bytes)
enum wkbGeometryType
{
wkbPoint = 1,
wkbLineString = 2,
wkbPolygon = 3,
wkbMultiPoint = 4,
wkbMultiLineString = 5,
wkbMultiPolygon = 6,
wkbGeometryCollection = 7
}
enum wkbByteOrder
{
wkbXDR = 0, // Big Endian
wkbNDR = 1 // Little Endian
}
// Building Blocks : Point, LinearRing
Point
{
double x;
double y;
}
LinearRing
{
uint32 numPoints;
Point points[numPoints];
}
WKBPoint
{
byte byteOrder;
uint32 wkbType; // 1
Point point;
}
WKBLineString
{
byte byteOrder;
uint32 wkbType; // 2
uint32 numPoints;
Point points[numPoints];
}
WKBPolygon
{
byte byteOrder;
uint32 wkbType; // 3
uint32 numRings;
LinearRing rings[numRings];
}
WKBMultiPoint
{
byte byteOrder;
uint32 wkbType; // 4
uint32 num_wkbPoints;
WKBPoint WKBPoints[num_wkbPoints];
}
WKBMultiLineString
{
byte byteOrder;
uint32 wkbType; // 5
uint32 num_wkbLineStrings;
WKBLineString WKBLineStrings[num_wkbLineStrings];
}
wkbMultiPolygon
{
byte byteOrder;
uint32 wkbType; // 6
uint32 num_wkbPolygons;
WKBPolygon wkbPolygons[num_wkbPolygons];
}
WKBGeometry
{
union
{
WKBPoint point;
WKBLineString linestring;
WKBPolygon polygon;
WKBGeometryCollection collection;
WKBMultiPoint mpoint;
WKBMultiLineString mlinestring;
WKBMultiPolygon mpolygon;
}
}
WKBGeometryCollection
{
byte byte_order;
uint32 wkbType; // 7
uint32 num_wkbGeometries;
WKBGeometry wkbGeometries[num_wkbGeometries];
}
A WKB value that corresponds to POINT(1 1) looks like this sequence of 21 bytes (shown here as hexadecimal digits):
0101000000000000000000F03F000000000000F03F
The sequence comprises the following components:
Byte order : 01
WKB type   : 01000000
X          : 000000000000F03F
Y          : 000000000000F03F
MySQL provides a hierarchy of datatypes, corresponding to the OpenGIS Geometry Model.
GEOMETRY
POINT
LINESTRING
POLYGON
MULTIPOINT
MULTILINESTRING
MULTIPOLYGON
GEOMETRYCOLLECTION
The GEOMETRY type can store geometries of any type; the other types restrict their values to a particular geometry type. Similarly, GEOMETRYCOLLECTION can store a collection of objects of any type, while the other collection types restrict the type of collection members to a particular geometry type.
MySQL provides a number of functions that take a Well-Known Text representation and, optionally, a spatial reference system identifier (SRID) as input parameters, and return the corresponding geometry.
GeomFromText() accepts a WKT of any geometry
type as its first argument.
For construction of geometry values restricted to a particular type, an implementation also provides a type-specific construction function for each geometry type (a usage example follows the list below).
GeomFromText(wkt,srid)
GeometryFromText(wkt,srid)
PointFromText(wkt,srid)
LineFromText(wkt,srid)
LineStringFromText(wkt,srid)
PolyFromText(wkt,srid)
PolygonFromText(wkt,srid)
MPointFromText(wkt,srid)
MultiPointFromText(wkt,srid)
MLineFromText(wkt,srid)
MultiLineStringFromText(wkt,srid)
MPolyFromText(wkt,srid)
MultiPolygonFromText(wkt,srid)
GeomCollFromText(wkt,srid)
GeometryCollectionFromText(wkt,srid)
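For example, a POINT value can be created from its WKT representation (with an optional SRID) and converted back to text like this:
mysql> SELECT AsText(PointFromText('POINT(1 1)',101));
+------------------------------------------+
| AsText(PointFromText('POINT(1 1)',101))  |
+------------------------------------------+
| POINT(1 1)                               |
+------------------------------------------+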
As an optional feature, an implementation may also support building of Polygon or MultiPolygon values, given an arbitrary collection of possibly intersecting rings or closed LineString values. Implementations that support this feature should include the following functions (Note: MySQL does not yet implement these):
BdPolyFromText(multiLineStringTaggedText String, SRID Integer):Polygon
BdMPolyFromText(multiLineStringTaggedText String, SRID Integer):MultiPolygon
MySQL provides a set of functions which take a BLOB containing Well-Known Binary representation and, optionally, a spatial reference system identifier (SRID) as their input parameters, and return the corresponding geometry.
GeomFromWKB() can accept a WKB of any geometry type as its first argument. For construction of geometry values restricted to a particular type, an implementation also provides a specific construction function for each type of geometry, as listed below (a usage example follows the list).
GeomFromWKB(wkb,srid)
GeometryFromWKB(wkt,srid)
PointFromWKB(wkb,srid)
LineFromWKB(wkb,srid)
LineStringFromWKB(wkb,srid)
PolyFromWKB(wkb,srid)
PolygonFromWKB(wkb,srid)
MPointFromWKB(wkb,srid)
MultiPointFromWKB(wkb,srid)
MLineFromWKB(wkb,srid)
MultiLineStringFromWKB(wkb,srid)
MPolyFromWKB(wkb,srid)
MultiPolygonFromWKB(wkb,srid)
GeomCollFromWKB(wkb,srid)
GeometryCollectionFromWKB(wkt,srid)
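For example, using the 21-byte WKB value shown earlier for POINT(1 1), the geometry can be constructed from a hexadecimal literal and converted back to text like this:
mysql> SELECT AsText(GeomFromWKB(0x0101000000000000000000F03F000000000000F03F));
+--------------------------------------------------------------------+
| AsText(GeomFromWKB(0x0101000000000000000000F03F000000000000F03F))  |
+--------------------------------------------------------------------+
| POINT(1 1)                                                          |
+--------------------------------------------------------------------+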
As an optional feature, an implementation may also support the building of Polygon or MultiPolygon values given an arbitrary collection of possibly intersecting rings or closed LineString values. Implementations that support this feature should include the following functions (Note: MySQL does not yet implement these):
BdPolyFromWKB(WKBMultiLineString Binary,SRID Integer): Polygon
BdMPolyFromWKB(WKBMultiLineString Binary, SRID Integer):MultiPolygon
Note: the functions listed in this section are not yet implemented in the current version.
MySQL provides a set of useful functions for creating geometry WKB
representations. The functions described in this section are MySQL
extensions to the OpenGIS specifications. The results of these
functions are BLOBs containing geometry WKB representations.
The results of these functions can be used as the first argument to any function in the GeomFromWKB() function family.
Point(x,y)
MultiPoint(WKBPoint,WKBPoint,...,WKBPoint)
LineString(WKBPoint,WKBPoint,...,WKBPoint)
MultiLineString(WKBLineString,WKBLineString,...,WKBLineString)
Polygon(WKBLineString,WKBLineString,...,WKBLineString)
MultiPolygon(WKBPolygon,WKBPolygon,...,WKBPolygon)
GeometryCollection(WKBGeometry,WKBGeometry,..,WKBGeometry)
MySQL provides a standard way of creating spatial columns for geometry types.
CREATE TABLE
mysql> CREATE TABLE g1 (p1 GEOMETRY); Query OK, 0 rows affected (0.02 sec) mysql>
ALTER TABLE
mysql> ALTER TABLE g1 ADD p2 POINT; Query OK, 0 rows affected (0.00 sec) Records: 0 Duplicates: 0 Warnings: 0 mysql>
After you have created spatial columns, you can populate them with your spatial data.
To populate spatial columns, MySQL supports the two spatial formats described previously: Well-Known Text (WKT) and Well-Known Binary (WKB).
INSERT INTO geom VALUES (GeomFromText('POINT(1 1)'))
INSERT INTO geom VALUES (GeomFromText('LINESTRING(0 0,1 1,2 2)'))
INSERT INTO geom VALUES (GeomFromText('POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))'))
INSERT INTO geom VALUES (GeomFromText('GEOMETRYCOLLECTION(POINT(1 1),LINESTRING(0 0,1 1,2 2,3 3,4 4))'))
INSERT INTO geom VALUES (PointFromText('POINT(1 1)'))
INSERT INTO geom VALUES (LineStringFromText('LINESTRING(0 0,1 1,2 2)'))
INSERT INTO geom VALUES (PolygonFromText('POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))'))
INSERT INTO geom VALUES (GeomCollFromText('GEOMETRYCOLLECTION(POINT(1 1),LINESTRING(0 0,1 1,2 2,3 3,4 4))'))
Note that a client application that wants to use WKB representations of geometry values is responsible for sending correctly formed WKB in queries to the server.
INSERT INTO geom VALUES (GeomFromWKB(0x0101000000000000000000F03F000000000000F03F));
INSERT INTO geom VALUES (GeomFromWKB(?));
A WKB value can also be sent as an escaped binary string, for example by escaping it with mysql_escape_string() in libmysqlclient applications:
INSERT INTO geom VALUES (GeomFromWKB('\0\0\0\0\0\0\0\0\0ð?\0\0\0\0\0\0ð?'));
Geometry values, previously stored in a table, can be fetched in either WKT or WKB representation.
The AsText() function provides textual access to geometry values
by converting them into a WKT string.
mysql> SELECT AsText(p1) FROM g1; +-------------------------+ | AsText(p1) | +-------------------------+ | POINT(1 1) | | LINESTRING(0 0,1 1,2 2) | +-------------------------+ 2 rows in set (0.00 sec)
The AsBinary() and AsWKB() functions provide binary access to geometry
values by converting them into a BLOB containing WKB.
SELECT AsBinary(g) FROM geom;
SELECT AsWKB(g) FROM geom;
AsBinary() and AsWKB() return a BLOB with a geometry in its WKB
representation.
After populating spatial columns with values, you are ready to query and analyse them. Spatial analysis can be performed using spatial functions in:
interactive SQL statements issued from a client such as mysql or MySQLCC.
MySQL provides a set of functions to perform various operations on spatial data. These functions can be divided into four major groups according to the type of operation they perform:
As discussed (see section 9.4.1 MySQL Spatial Data Types, see section 9.4.4 Populating Spatial Columns), MySQL understands Well-Known Text (WKT) and Well-Known Binary (WKB) geometry representations through support of these functions:
GeomFromWKT(string wkt [,integer srid]): geometry
GeomFromWKB(binary wkb [,integer srid]): geometry
AsWKT(geometry g): string
AsWKB(geometry g): binary
mysql> SELECT AsText(GeomFromText('LineString(1 1,2 2,3 3)'));
+-------------------------------------------------+
| AsText(GeomFromText('LineString(1 1,2 2,3 3)')) |
+-------------------------------------------------+
| LINESTRING(1 1,2 2,3 3) |
+-------------------------------------------------+
Functions that belong to this group take a geometry value as their argument and return some quantitative or qualitative property of the geometry. Some functions restrict their argument type.
These functions don't restrict their argument and accept a geometry of any type.
GeometryType(geometry g):string
mysql> SELECT GeometryType(GeomFromText('POINT(1 1)'));
+------------------------------------------+
| GeometryType(GeomFromText('POINT(1 1)')) |
+------------------------------------------+
| POINT |
+------------------------------------------+
Dimension(geometry g):integer
mysql> SELECT Dimension(GeomFromText('LineString(1 1,2 2)'));
+------------------------------------------------+
| Dimension(GeomFromText('LineString(1 1,2 2)')) |
+------------------------------------------------+
| 1 |
+------------------------------------------------+
SRID(geometry g):integer
mysql> SELECT SRID(GeomFromText('LineString(1 1,2 2)',101));
+-----------------------------------------------+
| SRID(GeomFromText('LineString(1 1,2 2)',101)) |
+-----------------------------------------------+
| 101 |
+-----------------------------------------------+
Envelope(geometry g):geometry
POLYGON((MINX MINY, MAXX MINY, MAXX MAXY, MINX MAXY, MINX MINY))
mysql> SELECT AsText(Envelope(GeomFromText('LineString(1 1,2 2)',101)));
+-----------------------------------------------------------+
| AsText(Envelope(GeomFromText('LineString(1 1,2 2)',101))) |
+-----------------------------------------------------------+
| POLYGON((1 1,2 1,2 2,1 2,1 1)) |
+-----------------------------------------------------------+
Note: MySQL does not yet implement the following functions:
Boundary(g:Geometry):Geometry
IsEmpty(geometry g):Integer
IsSimple(geometry g):Integer
X(point p):Double
mysql> SELECT X(GeomFromText('Point(56.7 53.34)',101));
+------------------------------------------+
| X(GeomFromText('Point(56.7 53.34)',101)) |
+------------------------------------------+
| 56.7 |
+------------------------------------------+
Y(point p):Double
mysql> SELECT Y(GeomFromText('Point(56.7 53.34)',101));
+------------------------------------------+
| Y(GeomFromText('Point(56.7 53.34)',101)) |
+------------------------------------------+
| 53.34 |
+------------------------------------------+
StartPoint(LineString l):Point
mysql> SELECT AsText(StartPoint(GeomFromText('LineString(1 1,2 2,3 3)')));
+-------------------------------------------------------------+
| AsText(StartPoint(GeomFromText('LineString(1 1,2 2,3 3)'))) |
+-------------------------------------------------------------+
| POINT(1 1) |
+-------------------------------------------------------------+
EndPoint(LineString l):Point
mysql> SELECT AsText(EndPoint(GeomFromText('LineString(1 1,2 2,3 3)')));
+------------------------------------------------------------+
| AsText(EndPoint(GeomFromText('LineString(1 1,2 2,3 3)'))) |
+------------------------------------------------------------+
| POINT(3 3) |
+------------------------------------------------------------+
PointN(LineString l,integer n):Point
mysql> SELECT AsText(PointN(GeomFromText('LineString(1 1,2 2,3 3)'),2));
+-----------------------------------------------------------+
| AsText(PointN(GeomFromText('LineString(1 1,2 2,3 3)'),2)) |
+-----------------------------------------------------------+
| POINT(2 2) |
+-----------------------------------------------------------+
GLength(LineString l):Double
mysql> SELECT GLength(GeomFromText('LineString(1 1,2 2,3 3)'));
+--------------------------------------------------+
| GLength(GeomFromText('LineString(1 1,2 2,3 3)')) |
+--------------------------------------------------+
| 2.8284271247462 |
+--------------------------------------------------+
NumPoints(LineString l):Integer
mysql> SELECT NumPoints(GeomFromText('LineString(1 1,2 2,3 3)'));
+----------------------------------------------------+
| NumPoints(GeomFromText('LineString(1 1,2 2,3 3)')) |
+----------------------------------------------------+
| 3 |
+----------------------------------------------------+
Note: MySQL does not yet implement the following functions:
IsRing(LineString l):Integer
IsClosed(LineString l):Integer
mysql> SELECT IsClosed(GeomFromText('LineString(1 1,2 2,3 3)'));
+---------------------------------------------------+
| IsClosed(GeomFromText('LineString(1 1,2 2,3 3)')) |
+---------------------------------------------------+
| 0 |
+---------------------------------------------------+
GLength(MultiLineString m):Double
mysql> SELECT GLength(GeomFromText('MultiLineString((1 1,2 2,3 3),(4 4,5 5))'));
+-------------------------------------------------------------------+
| GLength(GeomFromText('MultiLineString((1 1,2 2,3 3),(4 4,5 5))')) |
+-------------------------------------------------------------------+
| 4.2426406871193 |
+-------------------------------------------------------------------+
IsClosed(MultiLineString m):Integer
mysql> SELECT IsClosed(GeomFromText('MultiLineString((1 1,2 2,3 3),(4 4,5 5))'));
+--------------------------------------------------------------------+
| IsClosed(GeomFromText('MultiLineString((1 1,2 2,3 3),(4 4,5 5))')) |
+--------------------------------------------------------------------+
| 0 |
+--------------------------------------------------------------------+
Area(Polygon p):Double
mysql> SELECT Area(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))'));
+----------------------------------------------------------------------------+
| Area(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))')) |
+----------------------------------------------------------------------------+
| 8 |
+----------------------------------------------------------------------------+
NumInteriorRings(Polygon p):Integer
mysql> SELECT NumInteriorRings(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))'));
+----------------------------------------------------------------------------------------+
| NumInteriorRings(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))')) |
+----------------------------------------------------------------------------------------+
| 1 |
+----------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
ExteriorRing(Polygon p):LineString
mysql> SELECT AsText(ExteriorRing(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))')));
+--------------------------------------------------------------------------------------------+
| AsText(ExteriorRing(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))'))) |
+--------------------------------------------------------------------------------------------+
| LINESTRING(0 0,0 3,3 3,3 0,0 0) |
+--------------------------------------------------------------------------------------------+
InteriorRingN(Polygon p, Integer N):LineString
mysql> SELECT AsText(InteriorRingN(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))'),1));
+-----------------------------------------------------------------------------------------------+
| AsText(InteriorRingN(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1))'),1)) |
+-----------------------------------------------------------------------------------------------+
| LINESTRING(1 1,1 2,2 2,2 1,1 1) |
+-----------------------------------------------------------------------------------------------+
Note: MySQL does not yet implement the following functions:
Centroid(Polygon p):Point
PointOnSurface(p:Polygon):Point
Area(MultiPolygon m):Double
mysql> SELECT Area(GeomFromText('MultiPolygon(((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1)))'));
+-----------------------------------------------------------------------------------+
| Area(GeomFromText('MultiPolygon(((0 0,0 3,3 3,3 0,0 0),(1 1,1 2,2 2,2 1,1 1)))')) |
+-----------------------------------------------------------------------------------+
| 8 |
+-----------------------------------------------------------------------------------+
Note: MySQL does not yet implement the following functions:
Centroid(MultiPolygon p):Point
PointOnSurface(MultiPolygon m):Point
NumGeometries(GeometryCollection g):Integer
mysql> SELECT NumGeometries(GeomFromText('GeometryCollection(Point(1 1),LineString(2 2, 3 3))'));
+------------------------------------------------------------------------------------+
| NumGeometries(GeomFromText('GeometryCollection(Point(1 1),LineString(2 2, 3 3))')) |
+------------------------------------------------------------------------------------+
| 2 |
+------------------------------------------------------------------------------------+
GeometryN(GeometryCollection g,integer N):Geometry
mysql> SELECT AsText(GeometryN(GeomFromText('GeometryCollection(Point(1 1),LineString(2 2, 3 3))'),1));
+------------------------------------------------------------------------------------------+
| AsText(GeometryN(GeomFromText('GeometryCollection(Point(1 1),LineString(2 2, 3 3))'),1)) |
+------------------------------------------------------------------------------------------+
| POINT(1 1) |
+------------------------------------------------------------------------------------------+
Note: Functions for a specific geometry type return NULL
if the passed geometry is of the wrong type.
For example, Area() returns NULL if the object is neither
a Polygon nor a MultiPolygon.
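For example (a minimal illustration; the exact client output format may differ):
mysql> SELECT Area(GeomFromText('Point(1 1)'));
This returns NULL, because a Point is neither a Polygon nor a MultiPolygon.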
In section 9.5.2 Functions To Analyse Geometry Properties, we've already discussed some functions that can construct new geometries from existing ones:
Envelope(geometry g):geometry
StartPoint(LineString l):Point
EndPoint(LineString l):Point
PointN(LineString l,integer n):Point
ExteriorRing(Polygon p):LineString
InteriorRingN(Polygon p, Integer N):LineString
GeometryN(GeometryCollection g,integer N):Geometry
OpenGIS proposes a number of other functions that can produce geometries. They are designed to implement Spatial Operators.
Note: These functions are not yet implemented. They should appear in future releases.
Intersection(Geometry g1,g2):Geometry
Union(Geometry g1,g2):Geometry
Difference(Geometry g1,g2):Geometry
SymDifference(Geometry g1,g2):Geometry
Buffer(Geometry g, double d):Geometry
Returns a geometry that represents all points whose distance from g
is less than or equal to d.
ConvexHull(Geometry g):Geometry
The functions described in this section take two geometries as input parameters and return a qualitative or quantitative relation between them.
The current release provides some functions that can test relations between the minimal bounding rectangles of two geometries. They include:
MBRContains(geom1,geom2)
Returns 1 if the Minimum Bounding Rectangle of geom1
contains the Minimum Bounding Rectangle of geom2.
Otherwise, 0 is returned.
mysql> SELECT MBRContains(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))'),GeomFromText('Point(1 1)'));
+----------------------------------------------------------------------------------------+
| MBRContains(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))'),GeomFromText('Point(1 1)')) |
+----------------------------------------------------------------------------------------+
| 1 |
+----------------------------------------------------------------------------------------+
MBRWithin(geom1,geom2)
Returns 1 if the Minimum Bounding Rectangle of geom1
is within the Minimum Bounding Rectangle of geom2.
Otherwise, 0 is returned.
mysql> SELECT MBRWithin(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))'),GeomFromText('Polygon((0 0,0 5,5 5,5 0,0 0))'));
+----------------------------------------------------------------------------------------------------------+
| MBRWithin(GeomFromText('Polygon((0 0,0 3,3 3,3 0,0 0))'),GeomFromText('Polygon((0 0,0 5,5 5,5 0,0 0))')) |
+----------------------------------------------------------------------------------------------------------+
| 1 |
+----------------------------------------------------------------------------------------------------------+
MBRDisjoint(geom1,geom2)
MBREqual(geom1,geom2)
MBRIntersects(geom1,geom2)
MBROverlaps(geom1,geom2)
MBRTouches(geom1,geom2)
Note: The functions given in the list below are not yet implemented. These functions will provide full (not MBR-based only) support for spatial analysis.
Contains(geom1,geom2)
Returns 1 if geom1 completely contains geom2,
otherwise 0 is returned.
Crosses(geom1,geom2)
Returns 1 if geom1 spatially crosses geom2.
If geom1 is a polygon or a multipolygon, NULL is returned.
If geom2 is a point or a multipoint, NULL is returned.
Otherwise 0 is returned.
The term spatially crosses denotes a spatial relation when two given
geometries intersect, and their intersection results in a geometry that has
a dimension that is one less than the maximum dimension of the two given
geometries, and the intersection is not equal to any of the two given geometries.
Disjoint(geom1,geom2)
Returns 1 if geom1 is spatially disjoint from geom2,
i.e. if the given geometries do not intersect. Otherwise, 0 is returned.
Equal(geom1,geom2)
Returns 1 if geom1 is spatially equal to geom2,
otherwise 0 is returned.
Intersects(geom1,geom2)
Returns 1 if geom1 spatially intersects geom2,
otherwise 0 is returned.
Overlaps(geom1,geom2)
Returns 1 if geom1 spatially overlaps geom2;
otherwise, 0 is returned. The term spatially overlaps is used if two
geometries intersect and their intersection results in a geometry of the
same dimension that is not equal to either of the given geometries.
Touches(geom1,geom2)
Returns 1 if geom1 spatially touches geom2;
otherwise, 0 is returned. Two geometries spatially touch if the interiors
of the geometries do not intersect, but the boundary of one of the
geometries intersects either the boundary or the interior of the other
geometry.
Within(geom1,geom2)
Returns 1 if geom1 is spatially within geom2;
otherwise, 0 is returned.
Distance(geom1:Geometry,geom2:Geometry):Double
Search operations in ordinary databases can be optimised using indexes, and the same is true for spatial databases. With the help of the great variety of multi-dimensional indexing methods that have been designed, it is possible to optimise spatial searches. The most typical of these are point queries (searching for all objects that contain a given point) and region queries (searching for all objects that overlap a given region).
MySQL utilises R-Trees with quadratic splitting to index
spatial columns. A spatial index is built using the MBR of a geometry.
For most geometries, the MBR is the minimum rectangle that
surrounds the geometry. For a horizontal or a vertical
linestring, as well as for a point, the MBR is a rectangle degenerated
into the linestring or the point, respectively.
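The MBR of a geometry can be seen with the Envelope() function mentioned earlier (a minimal illustration; the exact output format may differ):
mysql> SELECT AsText(Envelope(GeomFromText('LineString(1 1,2 2)')));
This should return POLYGON((1 1,2 1,2 2,1 2,1 1)), the minimal rectangle that surrounds the diagonal linestring.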
MySQL can create spatial indexes in the same way it
can create regular indexes. The normal syntax for creating
indexes is extended with the SPATIAL keyword:
CREATE TABLE:
CREATE TABLE g (g GEOMETRY NOT NULL, SPATIAL INDEX(g));
CREATE INDEX:
CREATE SPATIAL INDEX sp_index ON g (g);
ALTER TABLE:
ALTER TABLE g ADD SPATIAL KEY(g);
Let's say we have a database with more than 32000 geometries.
Geometries are stored in the field g of type GEOMETRY.
The table also has a field fid, storing object IDs, with the
AUTO_INCREMENT attribute.
mysql> SHOW FIELDS FROM g;
+-------+----------+-----------+------+-----+---------+----------------+
| Field | Type     | Collation | Null | Key | Default | Extra          |
+-------+----------+-----------+------+-----+---------+----------------+
| fid   | int(11)  | binary    |      | PRI | NULL    | auto_increment |
| g     | geometry | binary    |      |     |         |                |
+-------+----------+-----------+------+-----+---------+----------------+
2 rows in set (0.00 sec)

mysql> SELECT COUNT(*) FROM g;
+----------+
| count(*) |
+----------+
|    32376 |
+----------+
1 row in set (0.00 sec)
Let's add a spatial index:
mysql> ALTER TABLE g ADD SPATIAL KEY(g);
Query OK, 32376 rows affected (4.05 sec)
Records: 32376  Duplicates: 0  Warnings: 0
The optimiser investigates if available spatial indexes can be involved
in the search whenever a query with a function like MBRContains()
or MBRWithin() in the WHERE clause is executed.
For example, let's say we want to find all objects that are in the given rectangle:
mysql> SELECT fid,AsText(g) FROM g WHERE
    -> MBRContains(GeomFromText('Polygon((30000 15000,31000 15000,31000 16000,30000 16000,30000 15000))'),g);
+-----+-----------------------------------------------------------------------------+
| fid | AsText(g) |
+-----+-----------------------------------------------------------------------------+
| 21 | LINESTRING(30350.4 15828.8,30350.6 15845,30333.8 15845,30333.8 15828.8) |
| 22 | LINESTRING(30350.6 15871.4,30350.6 15887.8,30334 15887.8,30334 15871.4) |
| 23 | LINESTRING(30350.6 15914.2,30350.6 15930.4,30334 15930.4,30334 15914.2) |
| 24 | LINESTRING(30290.2 15823,30290.2 15839.4,30273.4 15839.4,30273.4 15823) |
| 25 | LINESTRING(30291.4 15866.2,30291.6 15882.4,30274.8 15882.4,30274.8 15866.2) |
| 26 | LINESTRING(30291.6 15918.2,30291.6 15934.4,30275 15934.4,30275 15918.2) |
| 249 | LINESTRING(30337.8 15938.6,30337.8 15946.8,30320.4 15946.8,30320.4 15938.4) |
| 1 | LINESTRING(30250.4 15129.2,30248.8 15138.4,30238.2 15136.4,30240 15127.2) |
| 2 | LINESTRING(30220.2 15122.8,30217.2 15137.8,30207.6 15136,30210.4 15121) |
| 3 | LINESTRING(30179 15114.4,30176.6 15129.4,30167 15128,30169 15113) |
| 4 | LINESTRING(30155.2 15121.4,30140.4 15118.6,30142 15109,30157 15111.6) |
| 5 | LINESTRING(30192.4 15085,30177.6 15082.2,30179.2 15072.4,30194.2 15075.2) |
| 6 | LINESTRING(30244 15087,30229 15086.2,30229.4 15076.4,30244.6 15077) |
| 7 | LINESTRING(30200.6 15059.4,30185.6 15058.6,30186 15048.8,30201.2 15049.4) |
| 10 | LINESTRING(30179.6 15017.8,30181 15002.8,30190.8 15003.6,30189.6 15019) |
| 11 | LINESTRING(30154.2 15000.4,30168.6 15004.8,30166 15014.2,30151.2 15009.8) |
| 13 | LINESTRING(30105 15065.8,30108.4 15050.8,30118 15053,30114.6 15067.8) |
| 154 | LINESTRING(30276.2 15143.8,30261.4 15141,30263 15131.4,30278 15134) |
| 155 | LINESTRING(30269.8 15084,30269.4 15093.4,30258.6 15093,30259 15083.4) |
| 157 | LINESTRING(30128.2 15011,30113.2 15010.2,30113.6 15000.4,30128.8 15001) |
+-----+-----------------------------------------------------------------------------+
20 rows in set (0.00 sec)
Now let's check the way this query is executed, using EXPLAIN:
mysql> EXPLAIN SELECT fid,AsText(g) FROM g WHERE
    -> MBRContains(GeomFromText('Polygon((30000 15000,31000 15000,31000 16000,30000 16000,30000 15000))'),g);
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| 1 | SIMPLE | g | range | g | g | 32 | NULL | 50 | Using where |
+----+-------------+-------+-------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)
Now let's check what would happen if we didn't have a spatial index:
mysql> EXPLAIN SELECT fid,AsText(g) FROM g IGNORE INDEX (g) WHERE
    -> MBRContains(GeomFromText('Polygon((30000 15000,31000 15000,31000 16000,30000 16000,30000 15000))'),g);
+----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
| 1 | SIMPLE | g | ALL | NULL | NULL | NULL | NULL | 32376 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+-------+-------------+
1 row in set (0.00 sec)
Let's execute the above query, ignoring the spatial key we have:
mysql> SELECT fid,AsText(g) FROM g IGNORE INDEX (g) WHERE
    -> MBRContains(GeomFromText('Polygon((30000 15000,31000 15000,31000 16000,30000 16000,30000 15000))'),g);
+-----+-----------------------------------------------------------------------------+
| fid | AsText(g) |
+-----+-----------------------------------------------------------------------------+
| 1 | LINESTRING(30250.4 15129.2,30248.8 15138.4,30238.2 15136.4,30240 15127.2) |
| 2 | LINESTRING(30220.2 15122.8,30217.2 15137.8,30207.6 15136,30210.4 15121) |
| 3 | LINESTRING(30179 15114.4,30176.6 15129.4,30167 15128,30169 15113) |
| 4 | LINESTRING(30155.2 15121.4,30140.4 15118.6,30142 15109,30157 15111.6) |
| 5 | LINESTRING(30192.4 15085,30177.6 15082.2,30179.2 15072.4,30194.2 15075.2) |
| 6 | LINESTRING(30244 15087,30229 15086.2,30229.4 15076.4,30244.6 15077) |
| 7 | LINESTRING(30200.6 15059.4,30185.6 15058.6,30186 15048.8,30201.2 15049.4) |
| 10 | LINESTRING(30179.6 15017.8,30181 15002.8,30190.8 15003.6,30189.6 15019) |
| 11 | LINESTRING(30154.2 15000.4,30168.6 15004.8,30166 15014.2,30151.2 15009.8) |
| 13 | LINESTRING(30105 15065.8,30108.4 15050.8,30118 15053,30114.6 15067.8) |
| 21 | LINESTRING(30350.4 15828.8,30350.6 15845,30333.8 15845,30333.8 15828.8) |
| 22 | LINESTRING(30350.6 15871.4,30350.6 15887.8,30334 15887.8,30334 15871.4) |
| 23 | LINESTRING(30350.6 15914.2,30350.6 15930.4,30334 15930.4,30334 15914.2) |
| 24 | LINESTRING(30290.2 15823,30290.2 15839.4,30273.4 15839.4,30273.4 15823) |
| 25 | LINESTRING(30291.4 15866.2,30291.6 15882.4,30274.8 15882.4,30274.8 15866.2) |
| 26 | LINESTRING(30291.6 15918.2,30291.6 15934.4,30275 15934.4,30275 15918.2) |
| 154 | LINESTRING(30276.2 15143.8,30261.4 15141,30263 15131.4,30278 15134) |
| 155 | LINESTRING(30269.8 15084,30269.4 15093.4,30258.6 15093,30259 15083.4) |
| 157 | LINESTRING(30128.2 15011,30113.2 15010.2,30113.6 15000.4,30128.8 15001) |
| 249 | LINESTRING(30337.8 15938.6,30337.8 15946.8,30320.4 15946.8,30320.4 15938.4) |
+-----+-----------------------------------------------------------------------------+
20 rows in set (0.46 sec)
The execution time for this query rises from 0.00 seconds to 0.46 seconds when the index is not used.
In future releases, spatial indexes will also be used for optimising other functions. See section 9.5.4 Functions For Testing Spatial Relations Between Geometric Objects.
According to the OpenGIS specification, geometry columns should be added and
dropped with the AddGeometryColumn() and DropGeometryColumn()
functions, respectively. In MySQL this is to be done using the
regular ALTER TABLE command instead.
Length() and Area() assume a planar
coordinate system.
Length() on LineString and MultiLineString currently must be called as GLength().
The problem is that there is already a Length() function that operates on strings,
and sometimes it's not possible to distinguish whether the function was
called in a textual or spatial context. We need either to solve this
somehow, or decide on another function name.
This chapter describes a lot of things that you need to know when
working on the MySQL code. If you plan to contribute to MySQL
development, want to have access to the bleeding-edge in-between
versions code, or just want to keep track of development, follow the
instructions in section 2.3.4 Installing from the Development Source Tree.
If you are interested in MySQL internals, you should also subscribe
to our internals mailing list. This list is relatively low
traffic. For details on how to subscribe, please see
section 1.7.1.1 The MySQL Mailing Lists.
All developers at MySQL AB are on the internals list and we
help other people who are working on the MySQL code. Feel free to
use this list both to ask questions about the code and to send
patches that you would like to contribute to the MySQL project!
The MySQL server creates the following threads:
The signal thread handles alarms and calls process_alarm() to force timeouts on connections
that have been idle too long.
If mysqld is compiled with -DUSE_ALARM_THREAD, a dedicated
thread that handles alarms is created. This is only used on some systems where
there are problems with sigwait() or if one wants to use the
thr_alarm() code in one's application without a dedicated signal
handling thread.
If you use the --flush_time=# option, a dedicated thread is created
to flush all tables at the given interval.
Every table on which INSERT DELAYED is used gets its
own thread.
If you use --master-host, a slave replication thread will be
started to read and apply updates from the master.
mysqladmin processlist only shows the connection, INSERT DELAYED,
and replication threads.
Until recently, our main full-coverage test suite was based on proprietary
customer data and for that reason has not been publicly available. The only
publicly available part of our testing process consisted of the crash-me
test, a Perl DBI/DBD benchmark found in the sql-bench directory, and
miscellaneous tests located in the tests directory. The lack of a
standardised publicly available test suite has made it difficult for our users,
as well as developers, to do regression tests on the MySQL code. To
address this problem, we have created a new test system that is included in
the source and binary distributions starting in Version 3.23.29.
The current set of test cases doesn't test everything in MySQL, but it should catch most obvious bugs in the SQL processing code, OS/library issues, and is quite thorough in testing replication. Our eventual goal is to have the tests cover 100% of the code. We welcome contributions to our test suite. You may especially want to contribute tests that examine the functionality critical to your system, as this will ensure that all future MySQL releases will work well with your applications.
The test system consists of a test language interpreter
(mysqltest), a shell script to run all
tests (mysql-test-run), the actual test cases written in a special
test language, and their expected results. To run the test suite on
your system after a build, type make test or
mysql-test/mysql-test-run from the source root. If you have
installed a binary distribution, cd to the install root
(eg. /usr/local/mysql), and do scripts/mysql-test-run.
All tests should succeed. If not, you should try to find out why and
report the problem if this is a bug in MySQL.
See section 10.1.2.3 Reporting Bugs in the MySQL Test Suite.
If you have a copy of mysqld running on the machine where you want to
run the test suite you do not have to stop it, as long as it is not using
ports 9306 and 9307. If one of those ports is taken, you should
edit mysql-test-run and change the values of the master and/or slave
port to one that is available.
You can run one individual test case with
mysql-test/mysql-test-run test_name.
If one test fails, you should try running mysql-test-run with
the --force option to check whether any other tests fail.
You can use the mysqltest language to write your own test cases.
Unfortunately, we have not yet written full documentation for it.
You can, however, look at our current test cases and use
them as an example. The following points should help you get started:
The test cases are located in mysql-test/t/*.test.
A test file consists of ;-terminated statements and is similar to the
input of the mysql command-line client. A statement by default is a query
to be sent to the MySQL server, unless it is recognised as an internal
command (eg. sleep).
Queries that produce results, such as SELECT, SHOW,
EXPLAIN, etc., must be preceded with @/path/to/result/file. The
file must contain the expected results. An easy way to generate the result
file is to run mysqltest -r < t/test-case-name.test from
the mysql-test directory, and then edit the generated result files, if
needed, to adjust them to the expected output. In that case, be very careful
about not adding or deleting any invisible characters - make sure to only
change the text and/or delete lines. If you have to insert a line, make sure
the fields are separated with a hard tab, and there is a hard tab at the end.
You may want to use od -c to make sure your text editor has not messed
anything up during edit. We, of course, hope that you will never have to edit
the output of mysqltest -r as you only have to do it when you find a
bug.
Put the result files in the mysql-test/r directory and name them test_name.result. If the
test produces more than one result, you should use test_name.a.result,
test_name.b.result, etc.
If a statement is expected to return an error, you should on the line before it
specify --error error-number. The error number can be
a list of possible error numbers separated with ','.
If you are writing a replication test case, the test file should start with
source include/master-slave.inc;. To switch between
master and slave, use connection master; and connection slave;.
If you need to do something on an alternate connection, you can do
connection master1; for the master, and connection slave1; for
the slave.
If you need to do something in a loop, you can use something like this:
let $1=1000;
while ($1)
{
# do your queries here
dec $1;
}
To sleep between queries, use the sleep command. It supports fractions
of a second, so you can do sleep 1.3;, for example, to sleep 1.3
seconds.
If you want to use specific startup options for the slave in a test case,
put them in mysql-test/t/test_name-slave.opt. For
the master, put them in mysql-test/t/test_name-master.opt.
A short hypothetical test file illustrating these points follows.
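Putting the points above together, a test file might look like this (the file name, table name, and error number are only illustrative assumptions):
# t/example.test -- hypothetical sketch of a small test case
CREATE TABLE t1 (a INT);
let $1=3;
while ($1)
{
  INSERT INTO t1 VALUES (1);
  dec $1;
}
# the next statement is expected to fail (1146 is "table doesn't exist")
--error 1146
SELECT * FROM no_such_table;
sleep 0.5;
DROP TABLE t1;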
If your MySQL version doesn't pass the test suite you should do the following:
Make a bug report with the mysqlbug script
so that we can get information about your system and MySQL
version. See section 1.7.1.3 How to Report Bugs or Problems.
Include the output of mysql-test-run, as well as the
contents of all .reject files in the mysql-test/r directory.
Run the failing test case on its own:
cd mysql-test
mysql-test-run --local test-name
If this fails, then you should configure MySQL with
--with-debug and run mysql-test-run with the
--debug option. If this also fails send the trace file
`var/tmp/master.trace' to ftp://support.mysql.com/pub/mysql/secret
so that we can examine it. Please remember to also include a full
description of your system, the version of the mysqld binary and how you
compiled it.
mysql-test-run with the --force option to
see if there is any other test that fails.
If a test ends with Result length mismatch or Result
content mismatch, it means that the output of the test didn't match
the expected output exactly. This could be a bug in MySQL, or it could be
that your mysqld version produces slightly different results under some
circumstances.
Failed test results are put in a file with the same base name as the
result file with the .reject extension. If your test case is
failing, you should do a diff on the two files. If you cannot see how
they are different, examine both with od -c and also check their
lengths.
Check the log files in the mysql-test/var/log directory for hints of what went wrong.
Try running mysql-test-run with the --gdb and/or --debug
options.
See section E.1.2 Creating Trace Files.
If you have not compiled MySQL for debugging you should probably
do that. Just specify the --with-debug option to configure!
See section 2.3 Installing a MySQL Source Distribution.
There are two ways to add new functions to MySQL:
You can add functions through the user-definable function (UDF) interface.
UDFs are added and removed dynamically using the
CREATE FUNCTION and DROP FUNCTION statements.
See section 10.2.1 CREATE FUNCTION/DROP FUNCTION Syntax.
You can add functions as native (built-in) MySQL functions.
Native functions are compiled into the mysqld server and become
available on a permanent basis.
Each method has advantages and disadvantages:
Whichever method you use to add new functions, they may be used just like
native functions such as ABS() or SOUNDEX().
CREATE FUNCTION/DROP FUNCTION Syntax
CREATE [AGGREGATE] FUNCTION function_name RETURNS {STRING|REAL|INTEGER}
SONAME shared_library_name
DROP FUNCTION function_name
A user-definable function (UDF) is a way to extend MySQL with a new
function that works like a native (built-in) MySQL function such as
ABS() or CONCAT().
AGGREGATE is a new option for MySQL Version 3.23. An
AGGREGATE function works exactly like a native MySQL
GROUP function such as SUM() or COUNT().
CREATE FUNCTION saves the function's name, type, and shared library
name in the mysql.func system table. You must have the
INSERT and DELETE privileges for the mysql database
to create and drop functions.
All active functions are reloaded each time the server starts, unless
you start mysqld with the --skip-grant-tables option. In
this case, UDF initialisation is skipped and UDFs are unavailable.
(An active function is one that has been loaded with CREATE FUNCTION
and not removed with DROP FUNCTION.)
For instructions on writing user-definable functions, see section 10.2 Adding New Functions to MySQL. For the UDF mechanism to work, functions must be written in C or
C++, your operating system must support dynamic loading and you must have
compiled mysqld dynamically (not statically).
Note that to make AGGREGATE work, you must have a
mysql.func table that contains the column type. If this
is not the case, you should run the script
mysql_fix_privilege_tables to get this fixed.
For the UDF mechanism to work, functions must be written in C or C++ and your operating system must support dynamic loading. The MySQL source distribution includes a file `sql/udf_example.cc' that defines 5 new functions. Consult this file to see how UDF calling conventions work.
For mysqld to be able to use UDF functions, you should configure MySQL
with --with-mysqld-ldflags=-rdynamic. The reason is that on
many platforms (including Linux) you can load a dynamic library (with
dlopen()) from a statically linked program, which is what you get if
you are using --with-mysqld-ldflags=-all-static, but if you want to
use a UDF that needs to access symbols from mysqld (like the
metaphone example in `sql/udf_example.cc' that uses
default_charset_info), you must link the program with
-rdynamic (see man dlopen).
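For example (assuming you are building from the top of the source tree):
shell> ./configure --with-mysqld-ldflags=-rdynamic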
For each function that you want to use in SQL statements, you should define
corresponding C (or C++) functions. In the discussion below, the name
``xxx'' is used for an example function name. To distinguish between SQL and
C/C++ usage, XXX() (uppercase) indicates a SQL function call, and
xxx() (lowercase) indicates a C/C++ function call.
The C/C++ functions that you write to implement the interface for
XXX() are:
xxx() (required)
The main function. This is where the function result is computed.
The correspondence between the SQL type and the return type of your
C/C++ function is shown here:
| SQL type | C/C++ type |
| STRING   | char *     |
| INTEGER  | long long  |
| REAL     | double     |
xxx_init() (optional)
The initialisation function for xxx(). It can be used to:
Check the number of arguments to XXX().
Check that the arguments are of a required type.
Specify the max_length of the result.
Specify (for REAL functions) the maximum number of decimals.
Specify whether the result can be NULL.
xxx_deinit() (optional)
The deinitialisation function for xxx(). It should deallocate any
memory allocated by the initialisation function.
When a SQL statement invokes XXX(), MySQL calls the
initialisation function xxx_init() to let it perform any required
setup, such as argument checking or memory allocation. If xxx_init()
returns an error, the SQL statement is aborted with an error message and the
main and deinitialisation functions are not called. Otherwise, the main
function xxx() is called once for each row. After all rows have been
processed, the deinitialisation function xxx_deinit() is called so it
can perform any required cleanup.
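As a minimal sketch of this calling sequence, here is a hypothetical function MYLEN() (not part of the distribution) that returns the length of its string argument; the exact set of header files is an assumption and may vary between versions, and `sql/udf_example.cc' remains the authoritative reference:
#include <string.h>
#include <mysql.h>   /* UDF_INIT, UDF_ARGS, my_bool, STRING_RESULT */

my_bool mylen_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
  /* called once per statement: check the arguments */
  if (args->arg_count != 1 || args->arg_type[0] != STRING_RESULT)
  {
    strcpy(message, "MYLEN() requires one string argument");
    return 1;                          /* non-zero aborts the statement */
  }
  initid->maybe_null = 1;              /* MYLEN(NULL) returns NULL */
  return 0;
}

long long mylen(UDF_INIT *initid, UDF_ARGS *args,
                char *is_null, char *error)
{
  /* called once per row */
  if (!args->args[0])                  /* NULL argument */
  {
    *is_null = 1;
    return 0;
  }
  return (long long) args->lengths[0];
}

void mylen_deinit(UDF_INIT *initid)
{
  /* called once per statement: nothing was allocated, nothing to free */
}
Once compiled into a shared object and installed as described later in this section, such a function would be declared to the server with CREATE FUNCTION mylen RETURNS INTEGER SONAME "..." (the library name depends on how you build it).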
For aggregate functions (like SUM()), you must also provide the
following functions:
xxx_reset() (required)
Resets the current aggregate value and adds the argument as the initial aggregate value for a new group.
xxx_add() (required)
Adds the argument to the current aggregate value.
When using aggregate UDFs, MySQL works the following way:
Call xxx_init() to let the aggregate function allocate the memory it
will need to store results.
Sort the table according to the GROUP BY expression.
For the first row in a new group, call the xxx_reset() function.
For each new row that belongs to the same group, call the xxx_add() function.
When the group changes, or after the last row, call xxx() to get the result for the aggregate.
Call xxx_deinit() to let the UDF free any memory it has allocated.
All functions must be thread-safe (not just the main function,
but the initialisation and deinitialisation functions as well). This means
that you are not allowed to allocate any global or static variables that
change! If you need memory, you should allocate it in xxx_init()
and free it in xxx_deinit().
The main function should be declared as shown here. Note that the return
type and parameters differ, depending on whether you will declare the SQL
function XXX() to return STRING, INTEGER, or REAL
in the CREATE FUNCTION statement:
For STRING functions:
char *xxx(UDF_INIT *initid, UDF_ARGS *args,
char *result, unsigned long *length,
char *is_null, char *error);
For INTEGER functions:
long long xxx(UDF_INIT *initid, UDF_ARGS *args,
char *is_null, char *error);
For REAL functions:
double xxx(UDF_INIT *initid, UDF_ARGS *args,
char *is_null, char *error);
The initialisation and deinitialisation functions are declared like this:
my_bool xxx_init(UDF_INIT *initid, UDF_ARGS *args, char *message);
void xxx_deinit(UDF_INIT *initid);
The initid parameter is passed to all three functions. It points to a
UDF_INIT structure that is used to communicate information between
functions. The UDF_INIT structure members are listed below. The
initialisation function should fill in any members that it wishes to change.
(To use the default for a member, leave it unchanged.):
my_bool maybe_null
xxx_init() should set maybe_null to 1 if xxx()
can return NULL. The default value is 1 if any of the
arguments are declared maybe_null.
unsigned int decimals
The number of decimals in the result. The default value is the maximum
number of decimals in the arguments passed to the main function. For example,
if the function is passed 1.34, 1.345, and 1.3, the default would be 3,
because 1.345 has 3 decimals.
unsigned int max_length
The maximum length of the result. For string functions, the default is
the length of the longest argument. For integer functions, the default is
21 digits. For real functions, the default is 13 plus the number of decimals
indicated by initid->decimals. (For numeric functions, the length
includes any sign or decimal point characters.)
If you want to return a blob, you can set this to 65K or 16M; this
memory is not allocated but used to decide which column type to use if
there is a need to temporary store the data.
char *ptr
A pointer that the function can use for its own purposes. For example,
functions can use initid->ptr to communicate allocated memory
between functions. In xxx_init(), allocate the memory and assign it
to this pointer:
initid->ptr = allocated_memory;
In xxx() and xxx_deinit(), refer to initid->ptr to use
or deallocate the memory.
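A minimal sketch of this pattern (the 1000-byte size is an arbitrary assumption):
#include <stdlib.h>
#include <string.h>
#include <mysql.h>

my_bool xxx_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
  if (!(initid->ptr = malloc(1000)))    /* memory shared by later calls */
  {
    strcpy(message, "Couldn't allocate memory");
    return 1;
  }
  return 0;
}

void xxx_deinit(UDF_INIT *initid)
{
  free(initid->ptr);                    /* release what xxx_init() allocated */
}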
Here follows a description of the different functions you need to define when you want to create an aggregate UDF function.
char *xxx_reset(UDF_INIT *initid, UDF_ARGS *args,
char *is_null, char *error);
This function is called when MySQL finds the first row in a new group. In this function you should reset any internal summary variables and then use the given argument as the first value of the group.
In many cases this is implemented internally by resetting all variables
and then calling xxx_add().
char *xxx_add(UDF_INIT *initid, UDF_ARGS *args,
char *is_null, char *error);
This function is called for all rows that belong to the same group, except for the first row. In this function you should add the value in UDF_ARGS to your internal summary variable.
The xxx() function should be declared identically to the way you
declare it for a simple UDF function. See section 10.2.2.1 UDF Calling Sequences for simple functions.
This function is called when all rows in the group have been processed.
You should normally never access the args variable here but
return your value based on your internal summary variables.
All argument processing in xxx_reset() and xxx_add()
should be done identically as for normal UDFs. See section 10.2.2.3 Argument Processing.
The return value handling in xxx() should be done identically as
for a normal UDF. See section 10.2.2.4 Return Values and Error Handling.
The pointer argument to is_null and error is the same for
all calls to xxx_reset(), xxx_add() and xxx().
You can use this to remember that you got an error or that the xxx()
function should return NULL. Note that you should not store a string
into *error! It is just a 1-byte flag!
is_null is reset for each group (before calling xxx_reset()).
error is never reset.
If is_null or error is set after xxx(), then MySQL
will return NULL as the result for the group function.
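As a minimal sketch of the group handling for a hypothetical aggregate MYSUM() that sums a REAL argument, assuming that mysum_init() coerced the argument to REAL_RESULT and allocated a double through initid->ptr as described above (see the avgcost example in `sql/udf_example.cc' for a complete implementation):
char *mysum_add(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
  if (args->args[0])                           /* skip NULL values */
    *(double*) initid->ptr += *(double*) args->args[0];
  return 0;
}

char *mysum_reset(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
  *(double*) initid->ptr = 0.0;                /* new group: clear the sum */
  return mysum_add(initid, args, is_null, error);  /* add the first row */
}

double mysum(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
  return *(double*) initid->ptr;               /* the result for the group */
}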
The args parameter points to a UDF_ARGS structure that has the
members listed here:
unsigned int arg_count
The number of arguments. Check this value in the initialisation function if you
want your function to be called with a particular number of arguments. For example:
if (args->arg_count != 2)
{
strcpy(message,"XXX() requires two arguments");
return 1;
}
enum Item_result *arg_type
The types for each argument. The possible type values are
STRING_RESULT, INT_RESULT, and REAL_RESULT.
To make sure that arguments are of a given type and return an
error if they are not, check the arg_type array in the initialisation
function. For example:
if (args->arg_type[0] != STRING_RESULT ||
args->arg_type[1] != INT_RESULT)
{
strcpy(message,"XXX() requires a string and an integer");
return 1;
}
As an alternative to requiring your function's arguments to be of particular
types, you can use the initialisation function to set the arg_type
elements to the types you want. This causes MySQL to coerce
arguments to those types for each call to xxx(). For example, to
specify coercion of the first two arguments to string and integer, do this in
xxx_init():
args->arg_type[0] = STRING_RESULT;
args->arg_type[1] = INT_RESULT;
char **args
args->args communicates information to the initialisation function
about the general nature of the arguments your function was called with. For a
constant argument i, args->args[i] points to the argument
value. (See below for instructions on how to access the value properly.)
For a non-constant argument, args->args[i] is 0.
A constant argument is an expression that uses only constants, such as
3 or 4*7-2 or SIN(3.14). A non-constant argument is an
expression that refers to values that may change from row to row, such as
column names or functions that are called with non-constant arguments.
For each invocation of the main function, args->args contains the
actual arguments that are passed for the row currently being processed.
Functions can refer to an argument i as follows:
An argument of type STRING_RESULT is given as a string pointer plus a
length, to allow handling of binary data or data of arbitrary length. The
string contents are available as args->args[i] and the string length
is args->lengths[i]. You should not assume that strings are
null-terminated.
For an argument of type INT_RESULT, you must cast
args->args[i] to a long long value:
long long int_val;
int_val = *((long long*) args->args[i]);
For an argument of type REAL_RESULT, you must cast
args->args[i] to a double value:
double real_val;
real_val = *((double*) args->args[i]);
unsigned long *lengths
For the initialisation function, the lengths array indicates the
maximum string length for each argument. You should not change these.
For each invocation of the main function, lengths contains the
actual lengths of any string arguments that are passed for the row
currently being processed. For arguments of types INT_RESULT or
REAL_RESULT, lengths still contains the maximum length of
the argument (as for the initialisation function).
The initialisation function should return 0 if no error occurred and
1 otherwise. If an error occurs, xxx_init() should store a
null-terminated error message in the message parameter. The message
will be returned to the client. The message buffer is
MYSQL_ERRMSG_SIZE characters long, but you should try to keep the
message to less than 80 characters so that it fits the width of a standard
terminal screen.
For long long and double functions, the return value of the main
function xxx() is the function value. A string function should
return a pointer to the result and store the length of the string in the
length argument.
Set these to the contents and length of the return value. For example:
memcpy(result, "result string", 13);
*length = 13;
The result buffer that is passed to the calc function is 255 bytes
long. If your result fits in this, you don't have to worry about memory
allocation for results.
If your string function needs to return a string longer than 255 bytes,
you must allocate the space for it with malloc() in your
xxx_init() function or your xxx() function and free it in
your xxx_deinit() function. You can store the allocated memory
in the ptr slot in the UDF_INIT structure for reuse by
future xxx() calls. See section 10.2.2.1 UDF Calling Sequences for simple functions.
To indicate a return value of NULL in the main function, set
is_null to 1:
*is_null = 1;
To indicate an error return in the main function, set the error
parameter to 1:
*error = 1;
If xxx() sets *error to 1 for any row, the function
value is NULL for the current row and for any subsequent rows
processed by the statement in which XXX() was invoked. (xxx()
will not even be called for subsequent rows.) Note: in
MySQL versions prior to 3.22.10, you should set both *error
and *is_null:
*error = 1;
*is_null = 1;
Files implementing UDFs must be compiled and installed on the host where the server runs. This process is described below for the example UDF file `udf_example.cc' that is included in the MySQL source distribution. This file contains the following functions:
metaphon() returns a metaphon string of the string argument.
This is something like a soundex string, but it's more tuned for English.
myfunc_double() returns the sum of the ASCII values of the
characters in its arguments, divided by the sum of the length of its arguments.
myfunc_int() returns the sum of the length of its arguments.
sequence([const int]) returns a sequence starting from the given
number, or from 1 if no number has been given.
lookup() returns the IP number for a hostname.
reverse_lookup() returns the hostname for an IP number.
The function may be called with a string "xxx.xxx.xxx.xxx" or
four numbers.
A dynamically loadable file should be compiled as a sharable object file, using a command something like this:
shell> gcc -shared -o udf_example.so udf_example.cc
You can easily find out the correct compiler options for your system by running this command in the `sql' directory of your MySQL source tree:
shell> make udf_example.o
You should run a compile command similar to the one that make displays,
except that you should remove the -c option near the end of the line
and add -o udf_example.so to the end of the line. (On some systems,
you may need to leave the -c on the command.)
Once you compile a shared object containing UDFs, you must install it and
tell MySQL about it. Compiling a shared object from `udf_example.cc'
produces a file named something like `udf_example.so' (the exact name
may vary from platform to platform). Copy this file to some directory
searched by the dynamic linker ld, such as `/usr/lib' or add the
directory in which you placed the shared object to the linker configuration
file (e.g. `/etc/ld.so.conf').
On many systems, you can also set the LD_LIBRARY or
LD_LIBRARY_PATH environment variable to point at the directory where
you have your UDF function files. The dlopen manual page tells you
which variable you should use on your system. You should set this in
mysql.server or safe_mysqld startup scripts and restart
mysqld.
After the library is installed, notify mysqld about the new
functions with these commands:
mysql> CREATE FUNCTION metaphon RETURNS STRING SONAME "udf_example.so";
mysql> CREATE FUNCTION myfunc_double RETURNS REAL SONAME "udf_example.so";
mysql> CREATE FUNCTION myfunc_int RETURNS INTEGER SONAME "udf_example.so";
mysql> CREATE FUNCTION lookup RETURNS STRING SONAME "udf_example.so";
mysql> CREATE FUNCTION reverse_lookup
-> RETURNS STRING SONAME "udf_example.so";
mysql> CREATE AGGREGATE FUNCTION avgcost
-> RETURNS REAL SONAME "udf_example.so";
Functions can be deleted using DROP FUNCTION:
mysql> DROP FUNCTION metaphon;
mysql> DROP FUNCTION myfunc_double;
mysql> DROP FUNCTION myfunc_int;
mysql> DROP FUNCTION lookup;
mysql> DROP FUNCTION reverse_lookup;
mysql> DROP FUNCTION avgcost;
The CREATE FUNCTION and DROP FUNCTION statements update the
system table func in the mysql database. The function's name,
type and shared library name are saved in the table. You must have the
INSERT and DELETE privileges for the mysql database
to create and drop functions.
You should not use CREATE FUNCTION to add a function that has already
been created. If you need to reinstall a function, you should remove it with
DROP FUNCTION and then reinstall it with CREATE FUNCTION. You
would need to do this, for example, if you recompile a new version of your
function, so that mysqld gets the new version. Otherwise, the server
will continue to use the old version.
Active functions are reloaded each time the server starts, unless you start
mysqld with the --skip-grant-tables option. In this case, UDF
initialisation is skipped and UDFs are unavailable. (An active function is
one that has been loaded with CREATE FUNCTION and not removed with
DROP FUNCTION.)
The procedure for adding a new native function is described here. Note that you cannot add native functions to a binary distribution because the procedure involves modifying MySQL source code. You must compile MySQL yourself from a source distribution. Also note that if you migrate to another version of MySQL (for example, when a new version is released), you will need to repeat the procedure with the new version.
To add a new native MySQL function, follow these steps:
Add one line to `lex.h' that defines the function name in the
sql_functions[] array.
If the function prototype is simple (taking up to three arguments), declare
it in `lex.h' in the sql_functions[] array and add a function that creates a function
object in `item_create.cc'. Take a look at "ABS" and
create_funcs_abs() for an example of this.
If the function prototype is complicated (for example takes a variable number
of arguments), you should add two lines to `sql_yacc.yy'. One
indicates the preprocessor symbol that yacc should define (this
should be added at the beginning of the file). Then define the function
parameters and add an ``item'' with these parameters to the
simple_expr parsing rule. For an example, check all occurrences
of ATAN in `sql_yacc.yy' to see how this is done.
In `item_func.h', declare a class inheriting from Item_num_func or
Item_str_func, depending on whether your function returns a number or a
string.
In `item_func.cc', declare one of the following functions, depending on
whether you are defining a numeric or a string function:
double Item_func_newname::val()
longlong Item_func_newname::val_int()
String *Item_func_newname::Str(String *str)
If you inherit your object from any of the standard items (like
Item_num_func), you probably only have to define one of the above
functions and let the parent object take care of the other functions.
For example, the Item_str_func class defines a val() function
that executes atof() on the value returned by ::str().
You should probably also define the following object function:
void Item_func_newname::fix_length_and_dec()
This function should at least calculate
max_length based on the
given arguments. max_length is the maximum number of characters
the function may return. This function should also set maybe_null
= 0 if the main function can't return a NULL value. The
function can check if any of the function arguments can return
NULL by checking the arguments' maybe_null variable. You
can take a look at Item_func_mod::fix_length_and_dec for a
typical example of how to do this.
All functions must be thread-safe (in other words, don't use any global or static variables in the functions without protecting them with mutexes).
If you want to return NULL from ::val(), ::val_int(),
or ::str(), you should set null_value to 1 and return 0.
For ::str() object functions, there are some additional
considerations to be aware of:
The String *str argument provides a string buffer that may be
used to hold the result. (For more information about the String type,
take a look at the `sql_string.h' file.)
The ::str() function should return the string that holds the result, or
(char*) 0 if the result is NULL.
In MySQL, you can define a procedure in C++ that can access and
modify the data in a query before it is sent to the client. The modification
can be done on row-by-row or GROUP BY level.
We have created an example procedure in MySQL Version 3.23 to show you what can be done.
Additionally, we recommend that you take a look at mylua.
With this, you can use the LUA language to load a procedure into
mysqld at runtime.
analyse([max elements,[max memory]])
This procedure is defined in `sql/sql_analyse.cc'. It examines the result from your query and returns an analysis of the results:
max elements (default 256) is the maximum number of distinct values
analyse will notice per column. This is used by analyse to check if
the optimal column type should be of type ENUM.
max memory (default 8192) is the maximum memory analyse should
allocate per column while trying to find all distinct values.
SELECT ... FROM ... WHERE ... PROCEDURE ANALYSE([max elements,[max memory]])
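For example, to analyse all columns of a hypothetical table named student, limiting the analysis to 10 distinct values and 2000 bytes of memory per column (both values are only illustrative):
mysql> SELECT * FROM student PROCEDURE ANALYSE(10, 2000);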
For the moment, the only documentation for this is the source.
You can find all information about procedures by examining the following files:
This chapter lists some common problems and error messages that users have run into. You will learn how to figure out what the problem is, and what to do to solve it. You will also find proper solutions to some common problems.
When you run into problems, the first thing you should do is to find out which program / piece of equipment is causing problems:
If your keyboard is locked up, you may be able to recover by logging in to your machine from another machine and executing kbd_mode -a on it.
Examine your system with top, ps, taskmanager, or some similar program,
to check which program is taking all the CPU or is locking the machine.
Use top, df, or a similar program to check whether you are out of
memory, disk space, open files, or some other critical resource.
If after you have examined all other possibilities and you have concluded that it's the MySQL server or a MySQL client that is causing the problem, it's time to do a bug report for our mailing list or our support team. In the bug report, try to give a very detailed description of how the system is behaving and what you think is happening. You should also state why you think it's MySQL that is causing the problems. Take into consideration all the situations in this chapter. State any problems exactly how they appear when you examine your system. Use the 'cut and paste' method for any output and/or error messages from programs and/or log files!
Try to describe in detail which program is not working and all symptoms you see! We have in the past received many bug reports that just state "the system doesn't work". This doesn't provide us with any information about what could be the problem.
If a program fails, it's always useful to know:
Is the program taking up all CPU time? Check with top. Let the
program run for a while, it may be evaluating something heavy.
If it is the mysqld server that is causing problems, can you
do mysqladmin -u root ping or mysqladmin -u root processlist?
What does a client program (mysql, for example) say
when you try to connect to the MySQL server?
Does the client jam? Do you get any output from the program?
When sending a bug report, you should of course follow the guidelines described in this manual. See section 1.7.1.2 Asking Questions or Reporting Bugs.
This section lists some errors that users frequently get. You will find descriptions of the errors, and how to solve the problem here.
Access denied Error
See section 4.2.11 Causes of Access denied Errors.
See section 4.2.6 How the Privilege System Works.
MySQL server has gone away Error
This section also covers the related Lost connection to server
during query error.
The most common reason for the MySQL server has gone away error
is that the server timed out and closed the connection. By default, the
server closes the connection after 8 hours if nothing has happened. You
can change the time limit by setting the wait_timeout variable when
you start mysqld.
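For example, to check the current value and to start the server with a one-day timeout (the value shown is only an illustration):
shell> mysqladmin variables | grep wait_timeout
shell> safe_mysqld -O wait_timeout=86400 &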
Another common reason to receive the MySQL server has gone away error
is because you have issued a ``close'' on your MySQL connection
and then tried to run a query on the closed connection.
If you have a script, you just have to issue the query again for the client to do an automatic reconnection.
You normally can get the following error codes in this case (which one you get is OS-dependent):
| Error code           | Description |
| CR_SERVER_GONE_ERROR | The client couldn't send a question to the server. |
| CR_SERVER_LOST       | The client didn't get an error when writing to the server, but it didn't get a full answer (or any answer) to the question. |
You will also get this error if someone has killed the running thread with
kill #threadid#.
You can check that the MySQL server hasn't died by executing mysqladmin
version and examining the uptime. If the problem is that mysqld
crashed, you should concentrate on finding the reason for the crash.
You should in this case start by checking if issuing the query again
will kill MySQL again. See section A.4.1 What To Do If MySQL Keeps Crashing.
You can also get these errors if you send a query to the server that is
incorrect or too large. If mysqld gets a packet that is too large
or out of order, it assumes that something has gone wrong with the client and
closes the connection. If you need big queries (for example, if you are
working with big BLOB columns), you can increase the query limit by
starting mysqld with the -O max_allowed_packet=# option
(default 1M). The extra memory is allocated on demand, so mysqld will
allocate more memory only when you issue a big query or when mysqld must
return a big result row!
You will also get a lost connection if you are sending a packet >= 16M if your client is older than 4.0.8 and your server is 4.0.8 and above, or the other way around.
If you want to make a bug report regarding this problem, be sure that you include the following information:
State whether or not the MySQL server died. You can find this out in the
hostname.err file. See section A.4.1 What To Do If MySQL Keeps Crashing.
If a specific query kills mysqld and the involved tables were
checked with CHECK TABLE before you did the query, can you provide
a test case for this? See section E.1.6 Making a Test Case If You Experience Table Corruption.
What is the value of the wait_timeout variable in the MySQL server?
(mysqladmin variables gives you the value of this.)
Have you tried to run mysqld with --log and check whether the
issued query appears in the log?
See section 1.7.1.2 Asking Questions or Reporting Bugs.
Can't connect to [local] MySQL server Error
A MySQL client on Unix can connect to the mysqld server in two
different ways: Unix sockets, which connect through a file in the file
system (default `/tmp/mysqld.sock') or TCP/IP, which connects
through a port number. Unix sockets are faster than TCP/IP but can only
be used when connecting to a server on the same computer. Unix sockets
are used if you don't specify a hostname or if you specify the special
hostname localhost.
On Windows, if the mysqld server is running on 9x/Me, you can
connect only via TCP/IP. If the server is running on NT/2000/XP and
mysqld is started with --enable-named-pipe, you
can also connect with named pipes. The name of the named pipe is MySQL.
If you don't give a hostname when connecting to mysqld, a MySQL
client will first try to connect to the named pipe, and if this doesn't
work it will connect to the TCP/IP port. You can force the use of named
pipes on Windows by using . as the hostname.
The error (2002) Can't connect to ... normally means that there
isn't a MySQL server running on the system or that you are
using a wrong socket file or TCP/IP port when trying to connect to the
mysqld server.
Start by checking (using ps or the task manager on Windows) that
there is a process running named mysqld on your server! If there
isn't any mysqld process, you should start one. See section 2.4.2 Problems Starting the MySQL Server.
If a mysqld process is running, you can check the server by
trying these different connections (the port number and socket pathname
might be different in your setup, of course):
shell> mysqladmin version
shell> mysqladmin variables
shell> mysqladmin -h `hostname` version variables
shell> mysqladmin -h `hostname` --port=3306 version
shell> mysqladmin -h 'ip for your host' version
shell> mysqladmin --protocol=socket --socket=/tmp/mysql.sock version
Note the use of backquotes rather than forward quotes with the hostname
command; these cause the output of hostname (that is, the current
hostname) to be substituted into the mysqladmin command.
Here are some reasons the Can't connect to local MySQL server
error might occur:
mysqld is not running.
You are running on a system where mysqld uses the MIT-pthreads package. See section 2.2.5 Operating Systems Supported by MySQL. However,
not all MIT-pthreads versions support Unix sockets. On a system
without sockets support you must always specify the hostname explicitly
when connecting to the server. Try using this command to check the
connection to the server:
shell> mysqladmin -h `hostname` version
Someone has removed the Unix socket that mysqld uses (default
`/tmp/mysqld.sock'). You might have a cron job that removes
the MySQL socket (for example, a job that removes old files
from the `/tmp' directory). You can always run mysqladmin
version and check that the socket mysqladmin is trying to use
really exists. The fix in this case is to change the cron job to
not remove `mysqld.sock' or to place the socket somewhere else.
See section A.4.5 How to Protect or Change the MySQL Socket File `/tmp/mysql.sock'.
You have started the mysqld server with
the --socket=/path/to/socket option. If you change the socket
pathname for the server, you must also notify the MySQL clients
about the new path. You can do this by providing the socket path
as an argument to the client. See section A.4.5 How to Protect or Change the MySQL Socket File `/tmp/mysql.sock'.
You may need to kill the other mysqld threads (for example, with the
mysql_zap script) before you can start a new MySQL
server. See section A.4.1 What To Do If MySQL Keeps Crashing.
You may not have privileges to access the directory that holds the socket file. In this case, either change the access privileges for the directory or restart mysqld so that it uses a directory that you can access.
If you get the error message Can't connect to MySQL server on
some_hostname, you can try the following things to find out what the
problem is:
Check whether the server is up by doing telnet your-host-name
tcp-ip-port-number and pressing Enter a couple of times. If there
is a MySQL server running on this port you should get a
response that includes the version number of the running MySQL
server. If you get an error like telnet: Unable to connect to
remote host: Connection refused, then there is no server running on the
given port.
Try connecting to the mysqld daemon on the local machine and check
the TCP/IP port that mysqld is configured to use (the port variable) with
mysqladmin variables.
Check that the mysqld server is not started with the
--skip-networking option.
Host '...' is blocked Error
If you get an error like this:
Host 'hostname' is blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts'
this means that mysqld has gotten a lot (max_connect_errors)
of connect requests from the host 'hostname' that have been interrupted
in the middle. After max_connect_errors failed requests, mysqld
assumes that something is wrong (like an attack from a cracker), and
blocks the site from further connections until someone executes the command
mysqladmin flush-hosts.
By default, mysqld blocks a host after 10 connection errors.
You can easily adjust this by starting the server like this:
shell> safe_mysqld -O max_connect_errors=10000 &
Note that if you get this error message for a given host, you should first
check that there isn't anything wrong with TCP/IP connections from that
host. If your TCP/IP connections aren't working, it won't do you any good to
increase the value of the max_connect_errors variable!
Too many connections Error
If you get the error Too many connections when you try to connect
to MySQL, this means that there are already max_connections
clients connected to the mysqld server.
If you need more connections than the default (100), then you should restart
mysqld with a bigger value for the max_connections variable.
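For example (200 is only an illustrative value):
shell> safe_mysqld -O max_connections=200 &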
Note that mysqld actually allows (max_connections+1)
clients to connect. The last connection is reserved for a user with the
SUPER privilege. By not giving this privilege to normal
users (they shouldn't need this), an administrator with this privilege
can log in and use SHOW PROCESSLIST to find out what could be
wrong. See section 4.5.7.6 SHOW PROCESSLIST.
The maximum number of connections MySQL can support depends on how good the thread library is on a given platform. Linux or Solaris should be able to support 500-1000 simultaneous connections, depending on how much RAM you have and what your clients are doing.
Some non-transactional changed tables couldn't be rolled back Error
If you get the error/warning: Warning: Some non-transactional
changed tables couldn't be rolled back when trying to do a
ROLLBACK, this means that some of the tables you used in the
transaction didn't support transactions. These non-transactional tables
will not be affected by the ROLLBACK statement.
The most typical case when this happens is when you have tried to create
a table of a type that is not supported by your mysqld binary.
If mysqld doesn't support a table type (or if the table type is
disabled by a startup option), it will instead create the table
using the table type that most resembles the one you requested,
probably MyISAM.
You can check the table type for a table by doing:
SHOW TABLE STATUS LIKE 'table_name'. See section 4.5.7.2 SHOW TABLE STATUS.
You can check the extensions your mysqld binary supports by doing:
show variables like 'have_%'. See section 4.5.7.4 SHOW VARIABLES.
Out of memory Error
If you issue a query and get something like the following error:
mysql: Out of memory at line 42, 'malloc.c'
mysql: needed 8136 byte (8k), memory in use: 12481367 bytes (12189k)
ERROR 2008: MySQL client ran out of memory
note that the error refers to the MySQL client mysql. The
reason for this error is simply that the client does not have enough memory to
store the whole result.
To remedy the problem, first check that your query is correct. Is it
reasonable that it should return so many rows? If so,
you can use mysql --quick, which uses mysql_use_result()
to retrieve the result set. This places less of a load on the client (but
more on the server).
Packet too large Error
When a MySQL client or the mysqld server gets a packet bigger
than max_allowed_packet bytes, it issues a Packet too large
error and closes the connection. With some clients, you may also
get a Lost connection to MySQL server during query error if the
communication packet is too big.
A communication packet is a single SQL statement sent to the MySQL server or a single row that is sent to the client.
In MySQL 3.23 the biggest possible packet is 16M (due to limits in the client/server protocol). In MySQL 4.0.1 and up, this is only limited by the amount of memory you have on your server (up to a theoretical maximum of 2G).
Note that both the client and the server have their own
max_allowed_packet variable. If you want to handle big packets,
you have to increase this variable both in the client and in the server.
It's safe to increase this variable because memory is allocated only when needed; the variable is more a precaution to catch erroneous packets between the client and server, and to ensure that you don't accidentally use big packets and run out of memory.
If you are using the mysql client, you may specify a bigger
buffer by starting the client with
mysql --set-variable=max_allowed_packet=8M.
Other clients have different methods to set this variable.
Please note that --set-variable is deprecated as of
MySQL 4.0; just use --max-allowed-packet=8M instead.
You can use the option file to set max_allowed_packet to a larger
size in mysqld. For example, if you are expecting to store the
full length of a MEDIUMBLOB into a table, you'll need to start
the server with the set-variable=max_allowed_packet=16M option.
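For instance, a minimal option-file sketch matching the option just mentioned (assuming the server reads `/etc/my.cnf') would be:
[mysqld]
set-variable = max_allowed_packet=16M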
You can also get strange problems with large packets if you are using
big blobs, but you haven't given mysqld access to enough memory
to handle the query. If you suspect this is the case, try adding
ulimit -d 256000 to the beginning of the safe_mysqld script
and restart mysqld.
Starting with MySQL 3.23.40 you only get the Aborted
connection error if you start mysqld with --warnings.
If you find errors like the following in your error log:
010301 14:38:23 Aborted connection 854 to db: 'users' user: 'josh'
See section 4.9.1 The Error Log.
This means that one of the following has happened:
mysql_close() before exit.
wait_timeout or
interactive_timeout without doing any requests.
See section 4.5.7.4 SHOW VARIABLES.
See section 4.5.7.4 SHOW VARIABLES.
When the above happens, the server variable Aborted_clients is
incremented.
The server variable Aborted_connects is incremented when:
connect_timeout seconds to get
a connect packet.
See section 4.5.7.4 SHOW VARIABLES.
Note that the above could indicate that someone is trying to break into your database!
Other reasons for problems with Aborted clients / Aborted connections.
max_allowed_packet is too small or queries require more memory
than you have allocated for mysqld. See section A.2.8 Packet too large Error.
The table is full Error
There are a couple of different cases in which you can get this error:
tmp_table_size bytes.
To avoid this problem, you can use the -O tmp_table_size=# option
to make mysqld increase the temporary table size or use the SQL
option BIG_TABLES before you issue the problematic query.
See section 5.5.6 SET Syntax.
You can also start mysqld with the --big-tables option.
This is exactly the same as using BIG_TABLES for all queries.
In MySQL Version 3.23, in-memory temporary tables will automatically be
converted to a disk-based MyISAM table after the table size gets
bigger than tmp_table_size.
InnoDB tables and run out of room in the
InnoDB tablespace. In this case the solution is to extend the
InnoDB tablespace.
ISAM or MyISAM tables on an OS that only
supports files of 2G in size and you have hit this limit for the data
or index file.
MyISAM tables and the needed data or index size is
bigger than what MySQL has allocated pointers for. (If you don't specify
MAX_ROWS to CREATE TABLE MySQL will only allocate pointers
to hold 4G of data).
You can check the maximum data/index sizes by doing
SHOW TABLE STATUS FROM database LIKE 'table_name'; or by using
myisamchk -dv database/table_name.
If this is the problem, you can fix it by doing something like:
ALTER TABLE table_name MAX_ROWS=1000000000 AVG_ROW_LENGTH=nnn;
You only have to specify
AVG_ROW_LENGTH for tables with BLOB/TEXT
fields as in this case MySQL can't optimise the space required based
only on the number of rows.
Can't create/write to file Error
If you get an error for some queries of type:
Can't create/write to file '\\sqla3fe_0.ism'.
this means that MySQL can't create a temporary file for the
result set in the given temporary directory. (The above error is a
typical error message on Windows, and the Unix error message is similar.)
The fix is to start mysqld with --tmpdir=path or to add to your
option file:
[mysqld]
tmpdir=C:/temp
assuming that the `c:\\temp' directory exists. See section 4.1.2 `my.cnf' Option Files.
Also check the error code that you get with perror. One reason
may be a disk-full error:
shell> perror 28
Error code 28: No space left on device
Commands out of sync Error in Client
If you get Commands out of sync; you can't run this command now
in your client code, you are calling client functions in the wrong order!
This can happen, for example, if you are using mysql_use_result() and
try to execute a new query before you have called mysql_free_result().
It can also happen if you try to execute two queries that return data without
a mysql_use_result() or mysql_store_result() in between.
Ignoring user Error
If you get the following error:
Found wrong password for user: 'some_user@some_host'; ignoring user
this means that when mysqld was started or when it reloaded the
permissions tables, it found an entry in the user table with
an invalid password. As a result, the entry is simply ignored by the
permission system.
Possible causes of and fixes for this problem:
mysqld with an old
user table.
You can check this by executing mysqlshow mysql user to see if
the password field is shorter than 16 characters. If so, you can correct this
condition by running the scripts/add_long_password script.
mysqld with the --old-protocol option.
Update the user in the user table with a new password or
restart mysqld with --old-protocol.
user table without using the
PASSWORD() function. Use mysql to update the user in the
user table with a new password. Make sure to use the PASSWORD()
function:
mysql> UPDATE user SET password=PASSWORD('your password')
-> WHERE user='XXX';
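Note that the server does not notice changes made directly to the grant tables until the privileges are reloaded; for example:
mysql> FLUSH PRIVILEGES;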
Table 'xxx' doesn't exist Error
If you get the error Table 'xxx' doesn't exist or Can't
find file: 'xxx' (errno: 2), this means that no table exists
in the current database with the name xxx.
Note that as MySQL uses directories and files to store databases and tables, the database and table names are case-sensitive! (On Windows, database and table names are not case-sensitive, but all references to a given table within a query must use the same case!)
You can check which tables you have in the current database with
SHOW TABLES. See section 4.5.7 SHOW Syntax.
Can't initialize character set xxx Error
If you get an error like:
MySQL Connection Failed: Can't initialize character set xxx
This means one of the following things:
--with-charset=xxx or with --with-extra-charsets=xxx.
See section 2.3.3 Typical configure Options.
All standard MySQL binaries are compiled with
--with-extra-character-sets=complex which will enable support for
all multi-byte character sets. See section 4.6.1 The Character Set Used for Data and Sorting.
mysqld and the character set definition files are not in the place
where the client expects to find them.
In this case you need to:
configure Options.
--character-sets-dir=path-to-charset-dir option.
If you get ERROR '...' not found (errno: 23), Can't open
file: ... (errno: 24), or any other error with errno 23 or
errno 24 from MySQL, it means that you haven't allocated
enough file descriptors for MySQL. You can use the
perror utility to get a description of what the error number
means:
shell> perror 23
File table overflow
shell> perror 24
Too many open files
shell> perror 11
Resource temporarily unavailable
The problem here is that mysqld is trying to keep open too many
files simultaneously. You can either tell mysqld not to open so
many files at once or increase the number of file descriptors
available to mysqld.
To tell mysqld to keep open fewer files at a time, you can make
the table cache smaller by using the -O table_cache=32 option to
safe_mysqld (the default value is 64). Reducing the value of
max_connections will also reduce the number of open files (the
default value is 90).
To change the number of file descriptors available to mysqld, you
can use the option --open-files-limit=# to safe_mysqld or
-O open-files-limit=# to mysqld.
See section 4.5.7.4 SHOW VARIABLES.
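For example, assuming that a limit of 4096 file descriptors is reasonable for your system, the safe_mysqld form mentioned above could be invoked like this:
shell> safe_mysqld --open-files-limit=4096 &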
The easiest way to do that is to add the option to your option file.
See section 4.1.2 `my.cnf' Option Files. If you have an old mysqld version that
doesn't support this, you can edit the safe_mysqld script. There
is a commented-out line ulimit -n 256 in the script. You can
remove the '#' character to uncomment this line, and change the
number 256 to affect the number of file descriptors available to
mysqld.
ulimit (and open-files-limit) can increase the number of
file descriptors, but only up to the limit imposed by the operating
system. There is also a 'hard' limit that can only be overridden if you
start safe_mysqld or mysqld as root (just remember that
you need to also use the --user=... option in this case). If you
need to increase the OS limit on the number of file descriptors
available to each process, consult the documentation for your operating
system.
Note that if you run the tcsh shell, ulimit will not work!
tcsh will also report incorrect values when you ask for the current
limits! In this case you should start safe_mysqld with sh!
If you are linking your program and you get errors for unreferenced
symbols that start with mysql_, like the following:
/tmp/ccFKsdPa.o: In function `main': /tmp/ccFKsdPa.o(.text+0xb): undefined reference to `mysql_init' /tmp/ccFKsdPa.o(.text+0x31): undefined reference to `mysql_real_connect' /tmp/ccFKsdPa.o(.text+0x57): undefined reference to `mysql_real_connect' /tmp/ccFKsdPa.o(.text+0x69): undefined reference to `mysql_error' /tmp/ccFKsdPa.o(.text+0x9a): undefined reference to `mysql_close'
you should be able to solve this by adding -Lpath-to-the-mysql-library
-lmysqlclient last on your link line.
If you get undefined reference errors for the uncompress
or compress function, add -lz last on your link
line and try again!
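As a sketch of a complete compile-and-link line (the include and library paths are only examples; adjust them to where your MySQL client library is installed):
shell> gcc -o myprog myprog.c -I/usr/local/mysql/include/mysql \
           -L/usr/local/mysql/lib/mysql -lmysqlclient -lz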
If you get undefined reference errors for functions that should
exist on your system, like connect, check the man page for the
function in question, for which libraries you should add to the link
line!
If you get undefined reference errors for functions that don't
exist on your system, like the following:
mf_format.o(.text+0x201): undefined reference to `__lxstat'
it usually means that your library is compiled on a system that is not 100% compatible with yours. In this case you should download the latest MySQL source distribution and compile this yourself. See section 2.3 Installing a MySQL Source Distribution.
If you are trying to run a program and you then get errors for
unreferenced symbols that start with mysql_ or that the
mysqlclient library can't be found, this means that your system
can't find the shared `libmysqlclient.so' library.
The fix for this is to tell your system to search for shared libraries in the location where the library resides, by one of the following methods:
LD_LIBRARY_PATH environment variable.
LD_LIBRARY environment variable.
ldconfig.
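As a minimal sketch of the first method (assuming a Bourne-compatible shell and that the library was installed under `/usr/local/mysql/lib/mysql'):
shell> LD_LIBRARY_PATH=/usr/local/mysql/lib/mysql:$LD_LIBRARY_PATH
shell> export LD_LIBRARY_PATH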
Another way to solve this problem is to link your program statically, with
-static, or by removing the dynamic MySQL libraries
before linking your code. In the second case you should be
sure that no other programs are using the dynamic libraries!
The MySQL server mysqld can be started and run by any user.
In order to change mysqld to run as a Unix user user_name, you must
do the following:
mysqladmin shutdown).
user_name has
privileges to read and write files in them (you may need to do this as
the Unix root user):
shell> chown -R user_name /path/to/mysql/datadir
If directories or files within the MySQL data directory are symlinks, you'll also need to follow those links and change the directories and files they point to.
chown -R may not follow symlinks for
you.
user_name, or, if you are using
MySQL Version 3.22 or later, start mysqld as the Unix root
user and use the --user=user_name option. mysqld will switch
to run as the Unix user user_name before accepting any connections.
user line that specifies the user name to
the [mysqld] group of the `/etc/my.cnf' option file or the
`my.cnf' option file in the server's data directory. For example:
[mysqld]
user=user_name
At this point, your mysqld process should be running fine and dandy as
the Unix user user_name. One thing hasn't changed, though: the
contents of the permissions tables. By default (right after running the
permissions table install script mysql_install_db), the MySQL
user root is the only user with permission to access the mysql
database or to create or drop databases. Unless you have changed those
permissions, they still hold. This shouldn't stop you from accessing
MySQL as the MySQL root user when you're logged in
as a Unix user other than root; just specify the -u root option
to the client program.
Note that accessing MySQL as root, by supplying -u
root on the command-line, has nothing to do with MySQL running
as the Unix root user, or, indeed, as another Unix user. The access
permissions and user names of MySQL are completely separate from
Unix user names. The only connection with Unix user names is that if you
don't provide a -u option when you invoke a client program, the client
will try to connect using your Unix login name as your MySQL user
name.
If your Unix box itself isn't secured, you should probably at least put a
password on the MySQL root users in the access tables.
Otherwise, any user with an account on that machine can run mysql -u
root db_name and do whatever he likes.
If you have problems with file permissions, for example, if mysql
issues the following error message when you create a table:
ERROR: Can't find file: 'path/with/filename.frm' (Errcode: 13)
then the environment variable UMASK might be set incorrectly when
mysqld starts up. The default umask value is 0660. You can
change this behaviour by starting safe_mysqld as follows:
shell> UMASK=384  # = 600 in octal
shell> export UMASK
shell> /path/to/safe_mysqld &
By default MySQL will create database and RAID
directories with permission type 0700. You can modify this behaviour by
setting the UMASK_DIR variable. If you set this, new
directories are created with the combined UMASK and
UMASK_DIR. For example, if you want to give group access to
all new directories, you can do:
shell> UMASK_DIR=504  # = 770 in octal
shell> export UMASK_DIR
shell> /path/to/safe_mysqld &
In MySQL Version 3.23.25 and above, MySQL assumes that the
value for UMASK and UMASK_DIR is in octal if it starts
with a zero.
See section F Environment Variables.
All MySQL versions are tested on many platforms before they are released. This doesn't mean that there aren't any bugs in MySQL, but it means if there are bugs, they are very few and can be hard to find. If you have a problem, it will always help if you try to find out exactly what crashes your system, as you will have a much better chance of getting this fixed quickly.
First, you should try to find out whether the problem is that the
mysqld daemon dies or whether your problem has to do with your
client. You can check how long your mysqld server has been up by
executing mysqladmin version. If mysqld has died, you may
find the reason for this in the file
`mysql-data-directory/`hostname`.err'. See section 4.9.1 The Error Log.
On some systems you can find in this file a stack trace of where mysqld
died that you can resolve with resolve_back_stack. See section E.1.4 Using a Stack Trace. Note that the variable values written in the .err
file may not always be 100% correct.
Many crashes of MySQL are caused by corrupted index / data
files. MySQL will update the data on disk, with the
write() system call, after every SQL statement and before the
client is notified about the result. (This is not true if you are running
with delay_key_write, in which case only the data is written.)
This means that the data is safe even if mysqld crashes, as the OS will
ensure that the unflushed data is written to disk. You can force
MySQL to sync everything to disk after every SQL command by
starting mysqld with --flush.
The above means that normally you shouldn't get corrupted tables unless:
mysqld or the machine in the middle
of an update.
mysqld that caused it to die in the
middle of an update.
mysqld servers on the same data on a
system that doesn't support good filesystem locks (normally handled by
the lockd daemon) or if you are running
multiple servers with --skip-external-locking.
mysqld confused.
ALTER TABLE on a
repaired copy of the table!
Because it is very difficult to know why something is crashing, first try to check whether things that work for others crash for you. Please try the following things:
mysqld daemon with mysqladmin shutdown, run
myisamchk --silent --force */*.MYI on all tables, and restart the
mysqld daemon. This will ensure that you are running from a clean
state. See section 4 Database Administration.
mysqld --log and try to determine from the information in the log
whether some specific query kills the server. About 95% of all bugs are
related to a particular query! Normally this is one of the last queries in
the log file just before MySQL restarted. See section 4.9.2 The General Query Log.
If you can repeatedly kill MySQL with one of the queries, even
when you have checked all tables just before doing the query, then you
have been able to locate the bug and should do a bug report for this!
See section 1.7.1.3 How to Report Bugs or Problems.
fork_test.pl and fork2_test.pl.
--with-debug option or
--with-debug=full to configure and then recompile.
See section E.1 Debugging a MySQL server.
--skip-external-locking option to mysqld. On some
systems, the lockd lock manager does not work properly; the
--skip-external-locking option tells mysqld not to use external
locking. (This means that you cannot run 2 mysqld servers on the same
data and that you must be careful if you use myisamchk, but it may be
instructive to try the option as a test.)
mysqladmin -u root processlist when mysqld
appears to be running but not responding? Sometimes mysqld is not
comatose even though you might think so. The problem may be that all
connections are in use, or there may be some internal lock problem.
mysqladmin processlist will usually be able to make a connection even
in these cases, and can provide useful information about the current number
of connections and their status.
mysqladmin -i 5 status or mysqladmin -i 5
-r status in a separate window to produce statistics while you run
your other queries.
mysqld from gdb (or in another debugger).
See section E.1.3 Debugging mysqld under gdb.
mysqld has crashed inside
gdb:
backtrace
info local
up
info local
up
info local
With gdb you can also examine which threads exist with info threads
and switch to a specific thread with thread #, where # is the thread id.
BLOB/TEXT columns (but only VARCHAR columns), you
can try to change all VARCHAR to CHAR with ALTER
TABLE. This will force MySQL to use fixed-size rows.
Fixed-size rows take a little extra space, but are much more tolerant to
corruption!
The current dynamic row code has been in use at MySQL AB for at
least 3 years without any problems, but by nature dynamic-length rows are
more prone to errors, so it may be a good idea to try the above to see if
it helps!
If you never set a root password for MySQL, then the server will
not require a password at all for connecting as root. It is
recommended to always set a password for each user. See section 4.2.2 How to Make MySQL Secure Against Crackers.
If you have set a root password, but forgot what it was, you can
set a new password with the following procedure:
mysqld server by sending a kill (not kill
-9) to the mysqld server. The pid is stored in a `.pid'
file, which is normally in the MySQL database directory:
shell> kill `cat /mysql-data-directory/hostname.pid`
You must be either the Unix root user or the same user mysqld
runs as to do this.
mysqld with the --skip-grant-tables option.
mysqladmin password command:
shell> mysqladmin -u root password 'mynewpassword'
mysqld and restart it normally,
or just load the privilege tables with:
shell> mysqladmin -h hostname flush-privileges
Alternatively, you can set the new password using the mysql client:
mysqld with the --skip-grant-tables
option as described above.
mysqld server with:
shell> mysql -u root mysql
mysql client:
mysql> UPDATE user SET Password=PASSWORD('mynewpassword')
-> WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysqld and restart it normally.
When a disk-full condition occurs, MySQL does the following:
To alleviate the problem, you can take the following actions:
mysqladmin kill to the thread.
The thread will be aborted the next time it checks the disk (in 1 minute).
Exceptions to this behaviour occur when you use REPAIR or
OPTIMIZE, or when the indexes are created in a batch after a
LOAD DATA INFILE or an ALTER TABLE statement.
All of these commands may use big temporary files that, left to
themselves, would cause big problems for the rest of the system. If
MySQL encounters a disk-full condition while doing any of these operations,
it will remove the big temporary files and mark the table as crashed
(except for ALTER TABLE, in which case the old table is left
unchanged).
MySQL uses the value of the TMPDIR environment variable as
the pathname of the directory in which to store temporary files. If you don't
have TMPDIR set, MySQL uses the system default, which is
normally `/tmp' or `/usr/tmp'. If the filesystem containing your
temporary file directory is too small, you should edit safe_mysqld to
set TMPDIR to point to a directory in a filesystem where you have
enough space! You can also set the temporary directory using the
--tmpdir option to mysqld.
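For example, a sketch of pointing TMPDIR at a bigger filesystem before starting the server (the path is only an example):
shell> TMPDIR=/bigdisk/tmp
shell> export TMPDIR
shell> /path/to/safe_mysqld &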
MySQL creates all temporary files as hidden files. This ensures
that the temporary files will be removed if mysqld is terminated. The
disadvantage of using hidden files is that you will not see a big temporary
file that fills up the filesystem in which the temporary file directory is
located.
When sorting (ORDER BY or GROUP BY), MySQL normally
uses one or two temporary files. The maximum disk-space needed is:
(length of what is sorted + sizeof(database pointer)) * number of matched rows * 2
sizeof(database pointer) is usually 4, but may grow in the future for
really big tables.
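For example, sorting a 30-byte value over 1,000,000 matched rows would need roughly (30 + 4) * 1,000,000 * 2 = 68,000,000 bytes (about 65M) of temporary disk space.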
For some SELECT queries, MySQL also creates temporary SQL
tables. These are not hidden and have names of the form `SQL_*'.
ALTER TABLE creates a temporary table in the same directory as
the original table.
If you use MySQL 4.1 or later you can spread load between
several physical disks by setting --tmpdir to a list of paths
separated by colon : (semicolon ; on Windows). They
will be used in round-robin fashion.
Note: These paths should end up on different physical disks,
not different partitions of the same disk.
If you have problems with the fact that anyone can delete the
MySQL communication socket `/tmp/mysql.sock', you can,
on most versions of Unix, protect your `/tmp' filesystem by setting
the sticky bit on it. Log in as root and do the following:
shell> chmod +t /tmp
This will protect your `/tmp' filesystem so that files can be deleted
only by their owners or the superuser (root).
You can check if the sticky bit is set by executing ls -ld /tmp.
If the last permission bit is t, the bit is set.
You can change the location where MySQL puts the socket file in the following ways:
/etc/my.cnf:
[client]
socket=path-for-socket-file
[mysqld]
socket=path-for-socket-file
See section 4.1.2 `my.cnf' Option Files.
safe_mysqld and most
clients with the --socket=path-for-socket-file option.
MYSQL_UNIX_PORT environment
variable.
configure option
--with-unix-socket-path=path-for-socket-file. See section 2.3.3 Typical configure Options.
You can test that the socket works with this command:
shell> mysqladmin --socket=/path/to/socket version
If you have a problem with SELECT NOW() returning values in GMT and
not your local time, you have to set the TZ environment variable to
your current time zone. This should be done for the environment in which
the server runs, for example, in safe_mysqld or mysql.server.
See section F Environment Variables.
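As a minimal sketch (the time zone value 'MET' is only an example; use whatever value is appropriate on your system):
shell> TZ='MET'
shell> export TZ
shell> /path/to/safe_mysqld &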
By default, MySQL searches are case-insensitive (although there are
some character sets that are never case-insensitive, such as czech).
That means that if you search with col_name LIKE 'a%', you will get all
column values that start with A or a. If you want to make this
search case-sensitive, use something like INSTR(col_name, "A")=1 to
check a prefix. Or use STRCMP(col_name, "A") = 0 if the column value
must be exactly "A".
Simple comparison operations (>=, >, =, <, <=, sorting and
grouping) are based on each character's ``sort value''. Characters with
the same sort value (like E, e and é) are treated as the same character!
In older MySQL versions LIKE comparisons were done on
the uppercase value of each character (E == e but E <> é). In newer
MySQL versions LIKE works just like the other comparison
operators.
If you want a column always to be treated in case-sensitive fashion,
declare it as BINARY. See section 6.5.3 CREATE TABLE Syntax.
If you are using Chinese data in the so-called big5 encoding, you want to
make all character columns BINARY. This works because the sorting
order of big5 encoding characters is based on the order of ASCII codes.
DATE Columns
The format of a DATE value is 'YYYY-MM-DD'. According to
standard SQL, no other format is allowed. You should use this format in UPDATE
expressions and in the WHERE clause of SELECT statements. For
example:
mysql> SELECT * FROM tbl_name WHERE date >= '1997-05-05';
As a convenience, MySQL automatically converts a date to a number if
the date is used in a numeric context (and vice versa). It is also smart
enough to allow a ``relaxed'' string form when updating and in a WHERE
clause that compares a date to a TIMESTAMP, DATE, or a
DATETIME column. (Relaxed form means that any punctuation character
may be used as the separator between parts. For example, '1998-08-15'
and '1998#08#15' are equivalent.) MySQL can also convert a
string containing no separators (such as '19980815'), provided it
makes sense as a date.
The special date '0000-00-00' can be stored and retrieved as
'0000-00-00'. When using a '0000-00-00' date through
MyODBC, it will automatically be converted to NULL in
MyODBC Version 2.50.12 and above, because ODBC can't handle this kind of
date.
Because MySQL performs the conversions described above, the following statements work:
mysql> INSERT INTO tbl_name (idate) VALUES (19970505);
mysql> INSERT INTO tbl_name (idate) VALUES ('19970505');
mysql> INSERT INTO tbl_name (idate) VALUES ('97-05-05');
mysql> INSERT INTO tbl_name (idate) VALUES ('1997.05.05');
mysql> INSERT INTO tbl_name (idate) VALUES ('1997 05 05');
mysql> INSERT INTO tbl_name (idate) VALUES ('0000-00-00');
mysql> SELECT idate FROM tbl_name WHERE idate >= '1997-05-05';
mysql> SELECT idate FROM tbl_name WHERE idate >= 19970505;
mysql> SELECT MOD(idate,100) FROM tbl_name WHERE idate >= 19970505;
mysql> SELECT idate FROM tbl_name WHERE idate >= '19970505';
However, the following will not work:
mysql> SELECT idate FROM tbl_name WHERE STRCMP(idate,'19970505')=0;
STRCMP() is a string function, so it converts idate to
a string and performs a string comparison. It does not convert
'19970505' to a date and perform a date comparison.
Note that MySQL does very limited checking whether the date is
correct. If you store an incorrect date, such as '1998-2-31', the
wrong date will be stored.
Because MySQL packs dates for storage, it can't store any arbitrary date value; a date must fit into the packed storage format. The rules for accepting a date are:
DATE and DATETIME columns.
DATE column and you only know part
of the date.
If the date cannot be converted to any reasonable value, a 0 is
stored in the DATE field, which will be retrieved as
0000-00-00. This is both a speed and convenience issue as we
believe that the database's responsibility is to retrieve the same date
you stored (even if the data was not logically correct in all cases).
We think it is up to the application to check the dates, and not the server.
NULL Values
The concept of the NULL value is a common source of confusion for
newcomers to SQL, who often think that NULL is the same thing as an
empty string "". This is not the case! For example, the following
statements are completely different:
mysql> INSERT INTO my_table (phone) VALUES (NULL);
mysql> INSERT INTO my_table (phone) VALUES ("");
Both statements insert a value into the phone column, but the first
inserts a NULL value and the second inserts an empty string. The
meaning of the first can be regarded as ``phone number is not known'' and the
meaning of the second can be regarded as ``she has no phone''.
In SQL, the NULL value is always false in comparison to any
other value, even NULL. An expression that contains NULL
always produces a NULL value unless otherwise indicated in
the documentation for the operators and functions involved in the
expression. All columns in the following example return NULL:
mysql> SELECT NULL,1+NULL,CONCAT('Invisible',NULL);
If you want to search for column values that are NULL, you
cannot use the =NULL test. The following statement returns no
rows, because expr = NULL is FALSE, for any expression:
mysql> SELECT * FROM my_table WHERE phone = NULL;
To look for NULL values, you must use the IS NULL test.
The following shows how to find the NULL phone number and the
empty phone number:
mysql> SELECT * FROM my_table WHERE phone IS NULL;
mysql> SELECT * FROM my_table WHERE phone = "";
Note that you can only add an index on a column that can have NULL
values if you are using MySQL Version 3.23.2 or newer and are using the
MyISAM or InnoDB table type.
In earlier versions and with other table types, you must declare such
columns NOT NULL. This also means you cannot then insert
NULL into an indexed column.
When reading data with LOAD DATA INFILE, empty columns are updated
with ''. If you want a NULL value in a column, you should use
\N in the text file. The literal word 'NULL' may also be used
under some circumstances.
See section 6.4.9 LOAD DATA INFILE Syntax.
When using ORDER BY, NULL values are presented first.
In versions prior to 4.0.2, if you sort in descending order using
DESC, NULL values are presented last.
When using GROUP BY, all NULL values are regarded as equal.
To help with NULL handling, you can use the IS NULL and
IS NOT NULL operators and the IFNULL() function.
For some column types, NULL values are handled specially. If you
insert NULL into the first TIMESTAMP column of a table, the
current date and time is inserted. If you insert NULL into an
AUTO_INCREMENT column, the next number in the sequence is inserted.
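A minimal sketch (the table and column names are only examples):
mysql> CREATE TABLE t (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, ts TIMESTAMP);
mysql> INSERT INTO t (id, ts) VALUES (NULL, NULL);
The inserted row gets the next sequence number in id and the current date and time in ts.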
alias
You can use an alias to refer to a column in the GROUP BY,
ORDER BY, or in the HAVING part. Aliases can also be used
to give columns better names:
SELECT SQRT(a*b) as rt FROM table_name GROUP BY rt HAVING rt > 0;
SELECT id,COUNT(*) AS cnt FROM table_name GROUP BY id HAVING cnt > 0;
SELECT id AS "Customer identity" FROM table_name;
Note that standard SQL doesn't allow you to refer to an alias in a
WHERE clause. This is because when the WHERE code is
executed the column value may not yet be determined. For example, the
following query is illegal:
SELECT id,COUNT(*) AS cnt FROM table_name WHERE cnt > 0 GROUP BY id;
The WHERE clause is executed to determine which rows should
be included in the GROUP BY part, while HAVING is used to
decide which rows from the result set should be used.
As MySQL doesn't support subqueries (prior to Version 4.1), nor the use of more
than one table in the DELETE statement (prior to Version 4.0), you
should use the following approach to delete rows from 2 related tables:
SELECT the rows based on some WHERE condition in the main table.
DELETE the rows in the main table based on the same condition.
DELETE FROM related_table WHERE related_column IN (selected_rows).
If the total number of characters in the query with
related_column is more than 1,048,576 (the default value of
max_allowed_packet), you should split it into smaller parts and
execute multiple DELETE statements. You will probably get the
fastest DELETE by only deleting 100-1000 related_column
ids per query if the related_column is an index. If the
related_column isn't an index, the speed is independent of the
number of arguments in the IN clause.
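As a sketch of this approach (main_table, related_table, main_id, and the WHERE condition are all just example names):
mysql> SELECT id FROM main_table WHERE created < '1997-01-01';
mysql> DELETE FROM main_table WHERE created < '1997-01-01';
mysql> DELETE FROM related_table WHERE main_id IN (1,3,7);
Here 1, 3, and 7 stand for the id values returned by the first SELECT.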
If you have a complicated query that has many tables and that doesn't return any rows, you should use the following procedure to find out what is wrong with your query:
EXPLAIN and check if you can find something that is
obviously wrong. See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
WHERE clause.
LIMIT 10 with the query.
SELECT for the column that should have matched a row against
the table that was last removed from the query.
FLOAT or DOUBLE columns with numbers that
have decimals, you can't use '='. This problem is common in most
computer languages because floating-point values are not exact values.
In most cases, changing the FLOAT to a DOUBLE will fix this.
See section A.5.7 Problems with Floating-Point Comparison.
mysql test < query.sql that shows your problems.
You can create a test file with mysqldump --quick database tables > query.sql. Open the file in an editor, remove some insert lines (if there are
too many of these), and add your select statement at the end of the file.
Test that you still have your problem by doing:
shell> mysqladmin create test2
shell> mysql test2 < query.sql
Post the test file using
mysqlbug to mysql@lists.mysql.com.
Floating-point numbers can sometimes cause confusion, because they are not stored as exact values inside computer architecture. What you see on the screen usually is not the exact value of the number.
The column types FLOAT, DOUBLE, and DECIMAL are such.
CREATE TABLE t1 (i INT, d1 DECIMAL(9,2), d2 DECIMAL(9,2));
INSERT INTO t1 VALUES (1, 101.40, 21.40), (1, -80.00, 0.00),
(2, 0.00, 0.00), (2, -13.20, 0.00), (2, 59.60, 46.40),
(2, 30.40, 30.40), (3, 37.00, 7.40), (3, -29.60, 0.00),
(4, 60.00, 15.40), (4, -10.60, 0.00), (4, -34.00, 0.00),
(5, 33.00, 0.00), (5, -25.80, 0.00), (5, 0.00, 7.20),
(6, 0.00, 0.00), (6, -51.40, 0.00);
mysql> SELECT i, SUM(d1) AS a, SUM(d2) AS b
-> FROM t1 GROUP BY i HAVING a <> b;
+------+--------+-------+
| i | a | b |
+------+--------+-------+
| 1 | 21.40 | 21.40 |
| 2 | 76.80 | 76.80 |
| 3 | 7.40 | 7.40 |
| 4 | 15.40 | 15.40 |
| 5 | 7.20 | 7.20 |
| 6 | -51.40 | 0.00 |
+------+--------+-------+
The result is correct. Although the first five records look like they shouldn't pass the comparison test, they may do so because the differences between the numbers show up only around the tenth decimal place or so, depending on the computer architecture.
The problem cannot be solved by using ROUND() (or similar function), because the result is still a floating-point number. Example:
mysql> SELECT i, ROUND(SUM(d1), 2) AS a, ROUND(SUM(d2), 2) AS b
-> FROM t1 GROUP BY i HAVING a <> b;
+------+--------+-------+
| i | a | b |
+------+--------+-------+
| 1 | 21.40 | 21.40 |
| 2 | 76.80 | 76.80 |
| 3 | 7.40 | 7.40 |
| 4 | 15.40 | 15.40 |
| 5 | 7.20 | 7.20 |
| 6 | -51.40 | 0.00 |
+------+--------+-------+
This is what the numbers in column 'a' look like:
mysql> SELECT i, ROUND(SUM(d1), 2)*1.0000000000000000 AS a,
-> ROUND(SUM(d2), 2) AS b FROM t1 GROUP BY i HAVING a <> b;
+------+----------------------+-------+
| i | a | b |
+------+----------------------+-------+
| 1 | 21.3999999999999986 | 21.40 |
| 2 | 76.7999999999999972 | 76.80 |
| 3 | 7.4000000000000004 | 7.40 |
| 4 | 15.4000000000000004 | 15.40 |
| 5 | 7.2000000000000002 | 7.20 |
| 6 | -51.3999999999999986 | 0.00 |
+------+----------------------+-------+
Depending on the computer architecture you may or may not see similar results. Each CPU may evaluate floating-point numbers differently. For example, on some machines you may get 'correct' results by multiplying both arguments by 1, as the following example shows.
WARNING: NEVER TRUST THIS METHOD IN YOUR APPLICATION, THIS IS AN EXAMPLE OF A WRONG METHOD!!!
mysql> SELECT i, ROUND(SUM(d1), 2)*1 AS a, ROUND(SUM(d2), 2)*1 AS b
-> FROM t1 GROUP BY i HAVING a <> b;
+------+--------+------+
| i | a | b |
+------+--------+------+
| 6 | -51.40 | 0.00 |
+------+--------+------+
The reason the above example seems to work is that on the particular machine where the test was done, the CPU's floating-point arithmetic happens to round the numbers to the same value. However, there is no rule that any CPU should do so, so this method cannot be trusted.
The correct way to do floating-point number comparison is first to decide on the desired tolerance between the numbers and then do the comparison against that tolerance value. For example, if we agree that floating-point numbers should be regarded as the same if they are equal to within a precision of one ten-thousandth (0.0001), the comparison should be done like this:
mysql> SELECT i, SUM(d1) AS a, SUM(d2) AS b FROM t1
-> GROUP BY i HAVING ABS(a - b) > 0.0001;
+------+--------+------+
| i | a | b |
+------+--------+------+
| 6 | -51.40 | 0.00 |
+------+--------+------+
1 row in set (0.00 sec)
Conversely, if we wanted to get the rows where the numbers are the same, the test would be:
mysql> SELECT i, SUM(d1) AS a, SUM(d2) AS b FROM t1
-> GROUP BY i HAVING ABS(a - b) < 0.0001;
+------+-------+-------+
| i | a | b |
+------+-------+-------+
| 1 | 21.40 | 21.40 |
| 2 | 76.80 | 76.80 |
| 3 | 7.40 | 7.40 |
| 4 | 15.40 | 15.40 |
| 5 | 7.20 | 7.20 |
+------+-------+-------+
ALTER TABLE.
ALTER TABLE changes a table to the current character set.
If you get a duplicate key error during ALTER TABLE, then the cause
is either that the new character set maps two keys to the same value
or that the table is corrupted, in which case you should run
REPAIR TABLE on the table.
If ALTER TABLE dies with an error like this:
Error on rename of './database/name.frm' to './database/B-a.frm' (Errcode: 17)
the problem may be that MySQL has crashed in a previous ALTER
TABLE and there is an old table named `A-something' or
`B-something' lying around. In this case, go to the MySQL data
directory and delete all files that have names starting with A- or
B-. (You may want to move them elsewhere instead of deleting them.)
ALTER TABLE works the following way:
If something goes wrong with the renaming operation, MySQL tries to undo the changes. If something goes seriously wrong (this shouldn't happen, of course), MySQL may leave the old table as `B-xxx', but a simple rename on the system level should get your data back.
The whole point of SQL is to abstract the application from the data storage format. You should always specify the order in which you wish to retrieve your data. For example:
SELECT col_name1, col_name2, col_name3 FROM tbl_name;
will return columns in the order col_name1, col_name2, col_name3, whereas:
SELECT col_name1, col_name3, col_name2 FROM tbl_name;
will return columns in the order col_name1, col_name3, col_name2.
If you want to change the order of columns anyway, you can do it as follows:
INSERT INTO new_table SELECT fields-in-new_table-order FROM old_table.
old_table.
ALTER TABLE new_table RENAME old_table.
You should never, in an application, use SELECT * and
retrieve the columns based on their position, because the order and
position in which columns are returned may not remain
the same if you add, move, or delete columns. A simple change to your
database structure would then cause your application to fail.
Of course SELECT * is quite suitable for testing queries.
The following is a list of the limitations of TEMPORARY TABLES.
HEAP, ISAM,
MyISAM, MERGE, or InnoDB.
mysql> SELECT * FROM temporary_table, temporary_table AS t2;
RENAME on a TEMPORARY table.
Note that ALTER TABLE org_name RENAME new_name works!
Many users of MySQL have contributed very useful support tools and add-ons.
A list of some software available from the MySQL website (or any mirror) is shown here.
You can also visit our online listing of MySQL-related software at http://www.mysql.com/portal/software/. The community facilities there also allow for your input!
If you want to build MySQL support for the Perl DBI/DBD
interface, you should fetch the Data-Dumper, DBI, and
DBD-mysql files and install them.
See section 2.7 Perl Installation Comments.
Note: The programs listed here can be freely downloaded and used. They are copyrighted by their respective owners. Please see individual product documentation for more details on licensing and terms. MySQL AB assumes no liability for the correctness of the information in this chapter or for the proper operation of the programs listed herein.
libmysql.dll, by bsilva@umesd.k12.or.us.
TmySQL, a library to use MySQL with Delphi.
guile that allows guile to interact with SQL
databases. By Hal Roberts.
mydsn.dll. mydsn should be used to build
and remove the DSN registry file for the MyODBC driver in Coldfusion
applications. By Miguel Angel Solórzano.
PROCEDURE that can be loaded runtime.
mysqldump output to a C header file. By Harry Brueckner,
brueckner@mail.respublica.de.
access_to_mysql.txt, except that this
one is fully configurable, has better type conversion (including
detection of TIMESTAMP fields), provides warnings and suggestions
while converting, quotes all special characters in text and
binary data, and so on. It will also convert to mSQL v1 and v2,
and is free of charge for anyone. See
http://www.cynergi.net/exportsql/ for the latest version. By
Pedro Freire, support@cynergi.net. Note: Doesn't work with
Access2!
exportsql. By Brian Andrews.
Note: Doesn't work with Access2!
exportsql.txt. That is,
it imports data from MySQL into an Access database via
ODBC. This is very handy when combined with exportsql, because it lets you
use Access for all DB design and administration, and synchronise with
your actual MySQL server either way. Free of charge. See
http://www.netdive.com/freebies/importsql/ for any updates.
Created by Laurent Bossavit of NetDIVE.
Note: doesn't work with Access2!
mSQL to MySQL. By alfred@sb.net
mysqldump and pipe it to
the sqlconv.pl script. The script will parse through the
mysqldump output and will rearrange the fields so they can be
inserted into a new table. An example is when you want to create a new
table for a different site you are working on, but the table is just a
bit different (that is - fields in different order, etc.).
By Steve Shreeve.
This appendix lists the developers, contributors, and supporters that have helped to make MySQL what it is today.
These are the developers that are or have been employed by MySQL AB
to work on the MySQL database software, roughly in the order they
started to work with us. Following each developer is a small list of the
tasks that the developer is responsible for, or the accomplishments they
have made. All developers are involved in support.
mysqld).
mysys library.
ISAM and MyISAM libraries (B-tree index file
handlers with index compression and different record formats).
HEAP library. A memory table system with our superior full dynamic
hashing. In use since 1981 and published around 1984.
replace program (take a look at it, it's COOL!).
MyODBC, the ODBC driver for Windows95.
mSQL tools like msqlperl, DBD/DBI, and
DB2mysql.
crash-me and the foundation for the MySQL benchmarks.
texi2html.
mysys are left.
mysqlimport
PROCEDURE ANALYSE()
zlib) in the client/server protocol.
INSERT
mysqldump -e option
LOAD DATA LOCAL INFILE
SQL_CALC_FOUND_ROWS SELECT option
--max-user-connections=... option
net_read and net_write_timeout
GRANT/REVOKE and SHOW GRANTS FOR
UNION in 4.0
DELETE/UPDATE
MySQL++ C++ API and the MySQLGUI client.
CASE expression.
MD5() and COALESCE() functions.
RAID support for MyISAM tables.
SHOW CREATE TABLE.
mysql-bench
libmysqld, the embedded server.
MERGE library.
ALTER TABLE ... ORDER BY ....
UPDATE ... ORDER BY ....
DELETE ... ORDER BY ....
MySQLCC (MySQL Control Center)
SHA1(), AES_ENCRYPT() and AES_DECRYPT() functions.
While MySQL AB owns all copyrights in the MySQL server
and the MySQL manual, we wish to recognise those who have made
contributions of one kind or another to the MySQL distribution.
Contributors are listed here, in somewhat random order:
mysqlshutdown.exe and
mysqlwatch.exe
mSQL, but found that it couldn't
satisfy our purposes so instead we wrote a SQL interface to our
application builder Unireg. mysqladmin and mysql client are
programs that were largely influenced by their mSQL counterparts.
We have put a lot of effort into making the MySQL syntax a superset of
mSQL. Many of the API's ideas are borrowed from mSQL to
make it easy to port free mSQL programs to the MySQL API.
The MySQL software doesn't contain any code from mSQL.
Two files in the distribution (`client/insert_test.c' and
`client/select_test.c') are based on the corresponding (non-copyrighted)
files in the mSQL distribution, but are modified as examples showing
the changes necessary to convert code from mSQL to MySQL Server.
(mSQL is copyrighted by David J. Hughes.)
WHERE column REGEXP regexp.
gcc), the libc library
(from which we have borrowed `strto.c' to get some code working in Linux),
and the readline library (for the mysql client).
mysqldump (previously msqldump, but ported and enhanced by
Monty).
DBD (Perl) interface.
mysqlhotcopy.
_MB character set macros and the ujis and sjis character sets.
mysqlaccess, a program to show the access rights for a user.
xmysql, a graphical X client for MySQL Server.
DBD::mysql module.
FROM_UNIXTIME() time formatting, ENCRYPT() functions, and
bison advisor.
Active mailing list member.
DBI/DBD. Have
been of great help with crash-me and running benchmarks. Some new
date functions. The mysql_setpermissions script.
DBI/DBD section in the manual.
CREATE FUNCTION and
DROP FUNCTION.
AGGREGATE extension to UDF functions.
mysqlaccess more secure.
pthread_mutex() for OS/2.
MERGE tables to handle INSERTS. Active member
on the MySQL mailing lists.
DECIMAL.
Author of mysql_tableinfo.
mysqli extension (API) for use with MySQL 4.1 and up.
Other contributors, bugfinders, and testers: James H. Thompson, Maurizio Menghini, Wojciech Tryc, Luca Berra, Zarko Mocnik, Wim Bonis, Elmar Haneke, jehamby@lightside, psmith@BayNetworks.com, duane@connect.com.au, Ted Deppner ted@psyber.com, Mike Simons, Jaakko Hyvatti.
And lots of bug report/patches from the folks on the mailing list.
A big tribute goes to those that help us answer questions on the
mysql@lists.mysql.com mailing list:
DBD-mysql questions.
xmysql-related questions and basic installation questions.
mysqlbug.
DBD, Linux, some SQL syntax questions.
While MySQL AB owns all copyrights in the MySQL server
and the MySQL manual, we wish to recognise the following companies,
which helped us finance the development of the MySQL server,
such as by paying us for developing a new feature or giving us hardware
for development of the MySQL server.
mysqld version.
--skip-show-database
This appendix lists the changes from version to version in the MySQL source code.
We are now working actively on MySQL 4.1 & 5.0 and will only provide critical bug fixes for MySQL 4.0 and MySQL 3.23. We update this section as we add new features, so that everybody can follow the development.
Our TODO section contains what further plans we have for 4.1 & 5.0. See section 1.9 MySQL and The Future (The TODO).
Note that we tend to update the manual at the same time we make changes to MySQL. If you find a version listed here that you can't find on the MySQL download page (http://www.mysql.com/downloads/), this means that the version has not yet been released!
The date mentioned with a release version is the date of the last BitKeeper ChangeSet that this particular release has been based on, not the date when the packages have been made available. The binaries are usually made available a few days after the date of the tagged ChangeSet - building and testing all packages takes some time.
For the time being, version 5.0 is only available in source code. See section 2.3.4 Installing from the Development Source Tree.
The following changelog shows what has already been done in the 5.0 tree:
SELECT INTO list_of_vars, which can be of mixed
(i.e., global and local) type.
SET
@a=10; then SELECT @A; will now return 10. Of course,
the content of the variable is still case sensitive; only the name of
this variable is case insensitive.
Version 4.1 of the MySQL server includes many enhancements and new features. Binaries for this version are available for download at http://www.mysql.com/downloads/mysql-4.1.html.
SELECT * FROM t1 WHERE t1.a=(SELECT t2.b FROM t2); SELECT * FROM t1 WHERE (1,2,3) IN (SELECT a,b,c FROM t2);
SELECT t1.a FROM t1, (SELECT * FROM t2) t3 WHERE t1.a=t3.a;
INSERT ... ON DUPLICATE KEY UPDATE ... syntax. This allows you to
UPDATE an existing row if the insert would cause a duplicate value
in a PRIMARY or UNIQUE key. (REPLACE allows you to
overwrite an existing row, which is something entirely different.)
See section 6.4.3 INSERT Syntax. (A brief sketch appears after this list.)
GROUP_CONCAT() aggregate function.
See section 6.3.7 Functions for Use with GROUP BY Clauses.
BTREE index on HEAP tables.
SHOW WARNINGS shows warnings for the last command.
See section 4.5.7.9 SHOW WARNINGS | ERRORS.
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] table LIKE table.
HELP command that can be used in the mysql
command line client (and other clients) to get help for SQL commands.
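As a brief sketch of the new INSERT ... ON DUPLICATE KEY UPDATE syntax mentioned above (the table and column names are only examples, and name is assumed to be a PRIMARY or UNIQUE key):
mysql> INSERT INTO counters (name, hits) VALUES ('home', 1)
    ->     ON DUPLICATE KEY UPDATE hits=hits+1;
If a row for 'home' already exists, its hits value is incremented instead of the statement failing with a duplicate-key error.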
For a full list of changes, please refer to the changelog sections for each individual 4.1.x release.
Functionality added or changed:
SQL SQL_MODE=#, for a complex mode (like ANSI) we
now update the SQL_MODE variable to include all options the mode
requires.
ROLLUP, which gives
you summary rows for each GROUP BY level.
SQLSTATE codes for all server errors.
mysql_sqlstate() and
mysql_stmt_sqlstate() that returns the SQLSTATE error code for the
last error.
--lower-case-table-names=1 now also makes aliases case
insensitive. (Bug #534)
TIME columns with hours > 24 (days) were returned incorrectly to the client.
ANALYZE, OPTIMIZE, REPAIR, FLUSH (and its
equivalents invoked from mysqladmin) commands
are now stored in the binary log (hence are replicated to the slave),
except FLUSH LOGS, FLUSH MASTER, FLUSH SLAVE,
FLUSH TABLES WITH READ LOCK, and unless the optional
NO_WRITE_TO_BINLOG keyword
(or its alias LOCAL) was used (for a syntax example
see section 4.5.3 FLUSH Syntax).
RELAY_LOG_PURGE to enable/disable automatic
relay log purging.
LOAD DATA now produces warnings that can be fetched with
SHOW WARNINGS.
CREATE TABLE table_name (LIKE table_name2).
CREATE TABLE table_name (...) TYPE=storage_engine will give a
warning if storage engine is not honored.
Bugs fixed:
BEGIN, in the
first relay log). (Bug #53)
CONNECTION_ID() is now properly replicated (bug #177).
PASSWORD() function in 4.1 is now properly replicated
(bug #344).
UNION's
DERIVED TABLES when EXPLAIN is
used on a DERIVED TABLES with a join
DELETE with ORDER BY and
LIMIT caused by non initiated array of reference pointers.
USER() function caused by the error in the size of
the allocated string
GEOMETRY column type with a storage engine that does not support
it.
UNION caused by the empty select list and
a non-existent field being used in some of the sub-selects.
FLUSH LOGS was
issued on the master. (Bug #254)
Functionality added or changed:
localhost.
REPAIR of MyISAM tables now uses less temporary disk space when
sorting char columns.
DATE/DATETIME checking is now a bit stricter to support the
ability to automatically distinguish between date, datetime, and time
with microseconds. For example, dates of type YYYYMMDD HHMMSS are no
longer supported; one must either have separators between each
DATE/TIME part or not at all.
help
week in the mysql client and get help for the week()
function.
mysql_get_server_version().
record_in_range() method to MERGE tables to be
able to choose the right index when there are many to choose from.
RAND() and user variables @var.
ANSI_QUOTES on the fly.
EXPLAIN SELECT now can be killed. See section 4.5.6 KILL Syntax.
REPAIR TABLE now can be killed. See section 4.5.6 KILL Syntax.
USE|IGNORE|FORCE INDEX.
DROP TEMPORARY TABLE now only drops temporary tables and doesn't
end transactions.
UNION in derived tables.
TIMESTAMP is now returned as a string of type 'YYYY-MM-DD HH:MM:SS'
and different timestamp lengths are not supported.
This change was necessary for SQL standards compliance. In a future
version, a further change will be made (backward compatible with this
change), allowing the timestamp length to indicate the desired number
of digits of fractions of a second.
MYSQL_FIELD structure.
CREATE TABLE foo (a INT not null primary key) the
PRIMARY word is now optional.
CREATE TABLE the attribute SERIAL is now an alias for
BIGINT NOT NULL AUTO_INCREMENT UNIQUE.
SELECT ... FROM DUAL is an alias for SELECT ....
(To be compatible with some other databases).
CHAR/VARCHAR it's now
automatically changed to TEXT or BLOB; One will get a
warning in this case.
BLOB/TEXT types with the
syntax BLOB(length) and TEXT(length). MySQL will
automatically change it to one of the internal BLOB/TEXT
types.
CHAR BYTE is an alias for CHAR BINARY.
VARCHARACTER is an alias for VARCHAR.
integer MOD integer and integer DIV integer.
SERIAL DEFAULT VALUE added as an alias for AUTO_INCREMENT.
TRUE and FALSE added as alias for 1 and 0, respectively.
SELECT .. LIMIT 0 to return proper row count for
SQL_CALC_FOUND_ROWS.
--tmpdir=dirname1:dirname2:dirname3.
SELECT * from t1 where t1.a=(SELECT t2.b FROM t2).
SELECT a.col1, b.col2
FROM (SELECT MAX(col1) AS col1 FROM root_table) a,
other_table b
WHERE a.col1=b.col1;
BTREE index on HEAP tables.
CREATE TABLE.
SHOW FULL COLUMNS FROM table_name shows column comments.
ALTER DATABASE.
SHOW [COUNT(*)] WARNINGS shows warnings from the last command.
CREATE TABLE
... SELECT by defining the column in the CREATE part.
CREATE TABLE foo (a tinyint not null) SELECT b+1 AS 'a' FROM bar;
expr SOUNDS LIKE expr same as SOUNDEX(expr)=SOUNDEX(expr).
VARIANCE(expr) returns the variance of expr
CREATE
[TEMPORARY] TABLE [IF NOT EXISTS] table (LIKE table). The table can
be either normal or temporary.
--reconnect and disable-reconnect for the
mysql client, to reconnect automatically or not if the
connection is lost.
START SLAVE (STOP SLAVE) no longer returns an error
if the slave is already started (stopped); it returns a
warning instead.
SLAVE START and SLAVE STOP are no longer accepted by the query
parser; use START SLAVE and STOP SLAVE instead.
Version 4.0 of the MySQL server includes many enhancements and new features:
InnoDB table type is now included in the standard binaries,
adding transactions, row-level locking, and foreign keys.
See section 7.5 InnoDB Tables.
MERGE tables, now supporting INSERTs and
AUTO_INCREMENT.
See section 7.2 MERGE Tables.
UNION syntax in SELECT.
See section 6.4.1.2 UNION Syntax.
DELETE statements.
See section 6.4.6 DELETE Syntax.
libmysqld, the embedded server library.
See section 8.1.15 libmysqld, the Embedded MySQL Server Library.
GRANT privilege options for even tighter control and
security.
See section 4.3.1 GRANT and REVOKE Syntax.
GRANT system, particularly
useful for ISPs and other hosting providers.
See section 4.3.6 Limiting user resources.
SET Syntax.
For a full list of changes, please refer to the changelog sections for each individual 4.0.x release.
Functionality added or changed:
read-only to mysqld to only allow updates from
slave threads or the SUPER user. (Original patch from Markus Benning)
Bugs fixed:
ALTER TABLE ... ENABLE/DISABLE KEYS could cause a core dump when
done on an INSERT DELAYED table.
INSERT ... SELECT into an auto_increment column did
not replicate well. This bug is in the master, not in the slave. Bug #490.
INSERT ... SELECT inserted rows in a
non-transactional table, but failed at some point (for example because
of a 'Duplicate key' error), the query was not written to the binlog;
now it is written to the binlog, with its error code, as all other
queries are. For information about the slave-skip-errors option and how to
handle partially completed queries on the slave, see section 4.10.6 Replication Options in `my.cnf'. Bug #491.
Functionality added or changed:
PRIMARY KEY now implies NOT NULL. (Bug #390)
--enable-local-infile to match the Unix build configuration.
mysql-test-run. time does not
accept all required parameters on many platforms (e.g. QNX) and timing
the tests is not really required (it's not a benchmark anyway).
SHOW MASTER STATUS and SHOW SLAVE STATUS required the
SUPER privilege; now they accept REPLICATION CLIENT as well.
(Bug #343)
myisam_repair_threads variable to enable it.
See section 4.5.7.4 SHOW VARIABLES.
innodb_max_dirty_pages_pct variable which controls amount of
dirty pages allowed in InnoDB buffer pool.
CURRENT_USER() and "access denied" error messages now report hostname
exactly as it was specified in the GRANT command.
ANALYZE TABLE.
--new now changes binary items (0xFFDF) to be
treated as binary strings instead of numbers by default. This fixes some
problems with character sets where it's convenient to input the string
as a binary item. After this change you have to convert the binary
string to INTEGER with a CAST if you want to compare two
binary items with each other and know which one is bigger than the other.
SELECT CAST(0xfeff AS UNSIGNED) < CAST(0xff AS UNSIGNED).
This will be the default behaviour in MySQL 4.1. (Bug #152)
NATURAL LEFT JOIN, NATURAL RIGHT JOIN and
RIGHT JOIN when using many joined tables. The problem was that
the JOIN method was not always associated with the tables
surrounding the JOIN method. If you have a query that uses many
RIGHT JOIN or NATURAL ... JOINS you should check that they
work as you expected after upgrading MySQL to this version. (Bug #291)
delayed_insert_timeout on Linux (most modern glibc
libraries have a fixed pthread_cond_timedwait). (Bug #211)
max_insert_delayed_threads. (Bug #211)
UPDATE ... LIMIT to also count accepted, but not changed rows.
BIT_AND() and BIT_OR() now return an unsigned 64 bit value.
--log-warnings).
--skip-symlink and --use-symbolic-links and
replaced these with --symbolic-links.
innodb_flush_log_at_trx_commit was changed
from 0 to 1 to make InnoDB tables ACID by default. See section 7.5.3 InnoDB Startup Options.
SHOW KEYS to display keys that are disabled by
ALTER TABLE DISABLE KEYS command.
CREATE TABLE, first
try if the default table type exists before falling back to MyISAM.
MEMORY as an alias for HEAP.
rnd to my_rnd as the name was too generic
and is an exported symbol in libmysqlclient (thanks to Dennis Haney
for the initial patch).
mysqldump no longer silently deletes the binlogs when called with
--master-data or --first-slave;
while this behaviour was convenient for some
users, others may suffer from it. Now one has to explicitly ask for
this deletion with the new --delete-master-logs option.
replicate-wild-ignore-table=mysql.%)
to exclude mysql.user, mysql.host, mysql.db,
mysql.tables_priv and mysql.columns_priv from
replication, then GRANT and REVOKE will not be replicated.
Bugs fixed:
Access denied error message had wrong Using password
value. (Bug #398)
\* commands
inside backtick-quoted strings.
Unknown error when using UPDATE ... LIMIT. (Bug #373)
GROUP BY with constants. (Bug #387)
UNION and OUTER JOIN. (Bug #386)
UPDATE and the query required a
temporary table bigger than tmp_table_size. (Bug #286)
mysql_install_db with the -IN-RPM option for the Mac OS X
installation to not fail on systems with an improperly configured
hostname.
LOAD DATA INFILE will now read 000000 as a zero date instead of as
"2000-00-00".
DELETE FROM table WHERE const_expression
always deleted the whole table (even if the expression result was false).
(Bug #355)
FORMAT('nan',#). (Bug #284)
HAVING ... COUNT(DISTINCT ...).
*) in
MATCH ... AGAINST() in some complex joins.
REPAIR ... USE_FRM command, when used on read-only,
nonexisting table or a table with a crashed index file.
--no-defaults, with a prompt
that contained hostname and connection to non-existing db was requested
LEFT, RIGHT and MID when used with
multi-byte character sets and some GROUP BY queries. (Bug #314)
ORDER BY being discarded for some
DISTINCT queries. (Bug #275)
SET SQL_BIG_SELECTS=1 works as documented (This corrects
a new bug introduced in 4.0)
UPDATE ... ORDER BY. (Bug #241)
WHERE clause with constant
expression like in WHERE 1 AND (a=1 AND b=1).
SET SQL_BIG_SELECTS=1 works again.
SHOW GRANTS.
FULLTEXT index stopped working after ALTER TABLE
that converts TEXT field to CHAR. (Bug #283)
SELECT and wildcarded select list,
when user only had partial column SELECT privileges on the table.
SET PASSWORD.
NATURAL JOINs in the query.
SUM() didn't return NULL when there were no rows in the result
or when all values were NULL.
--open-files-limit in
`mysqld_safe'. (Bug #264)
SHOW PROCESSLIST.
NAN in FORMAT(...) function ...
ALTER TABLE ENABLE / DISABLE KEYS which failed to
force a refresh of table data in the cache.
LOAD DATA INFILE for custom parameters
(ENCLOSED, TERMINATED and so on) and temporary tables
(Bugs #183 and #222).
FLUSH LOGS was
issued on the master. (Bug #254)
LOAD DATA INFILE IGNORE : when reading
the binary log, mysqlbinlog and the replication code read REPLACE
instead of IGNORE. This could make the slave's table
become different from the master's table. (Bug #218)
relay_log_space_limit was set to a too
small value. (Bug #79)
MyISAM when a row is inserted into a table with a
large number of NULL columns. Bug was caused by wrong calculation
of the record length, as the space required for storage of NULL
bits was not added to the total record length.
SELECT @nonexistent_variable caused an
error in the client/server protocol due to the net_printf() output being sent to
the client twice.
SQL_BIG_SELECTS option.
SHOW PROCESSLIST which only displayed a localhost
in the "Host" column. This was caused by a glitch that only used
current thread info instead of info from the linked list of threads.
multi-table-update for InnoDB tables as well.
multi-table-updates that caused some rows to be
updated several times.
mysqldump when it was called with
--master-data: the CHANGE MASTER TO commands appended to
the SQL dump had wrong coordinates. (Bug #159)
USER() was replicated
on the slave; this caused a segfault on the slave. (Bug
#178). USER() is still badly replicated on the slave (it is
replicated to "").
Functionality added or changed:
SHOW PROCESSLIST will now include the client TCP port after the
hostname to make it easier to know from which client the request
originated.
Bugs fixed:
sort_buffer variable.
INSERT INTO u SELECT ... FROM t was written too late to the
binary log if t was very frequently updated during the execution of
this query. This could cause a problem with mysqlbinlog or
replication. The master must be upgraded, not the slave. (Bug #136)
WHERE clause. (Bug #142)
multi-table updates with InnoDB
tables. This bug occurred as, in many cases, InnoDB tables can not
be updated "on the fly", but offsets to the records have to be stored in
a temporary table.
server
RPM subpackage. (Bug #141)
.MYI files.
BACKUP TABLE to overwrite existing files.
UPDATEs when user had all privileges
on the database where tables are located and there were any entries in
tables_priv table, i.e. grant_option was true.
TRUNCATE any table in the same database.
LOCK TABLE followed by DROP
TABLE in the same thread. In this case one could still kill the thread
with KILL.
LOAD DATA LOCAL INFILE was not properly written to the binary
log (hence not properly replicated). (Bug #82)
RAND() entries were not read correctly by mysqlbinlog from
the binary log, which caused problems when restoring a table that had been
populated with RAND(), e.g. INSERT INTO t1 VALUES(RAND()). In
replication this worked OK.
SET SQL_LOG_BIN=0 was ignored for INSERT DELAYED
queries. (Bug #104)
SHOW SLAVE STATUS reported too old positions
(columns Relay_Master_Log_File and Exec_master_log_pos)
for the last executed statement from the master, if this statement
was the COMMIT of a transaction. The master must be upgraded for that,
not the slave. (Bug #52)
LOAD DATA INFILE was not replicated by the slave if
replicate_*_table was set on the slave. (Bug #86)
RESET SLAVE, the coordinates displayed by SHOW
SLAVE STATUS looked as if they had not been reset (they had been, but only
internally). (Bug #70)
LOAD DATA.
ANALYZE procedure with error.
CHAR(0) columns that could cause wrong
results from the query.
AUTO_INCREMENT column,
as a secondary column in a multi-column key (see section 3.5.9 Using AUTO_INCREMENT), when
data was inserted with INSERT ... SELECT or LOAD DATA into
an empty table.
STOP SLAVE didn't stop the slave until the slave
got one new command from the master (this bug has been fixed for MySQL 4.0.11
by releasing updated 4.0.11a Windows packages, which include this individual
fix on top of the 4.0.11 sources). (Bug #69)
LOAD DATA command
was issued with full table name specified, including database prefix.
pthread_attr_getstacksize on
HP-UX 10.20 (Patch was also included in 4.0.11a sources).
bigint test to not fail on some platforms (e.g. HP-UX and
Tru64) due to different return values of the atof() function.
rpl_rotate_logs test to not fail on certain platforms (e.g.
Mac OS X) due to a too long file name (changed slave-master-info.opt
to .slave-mi).
Functionality added or changed:
NULL is now sorted LAST if you use ORDER BY ... DESC
(as it was before MySQL 4.0.2). This change was required to
comply with the SQL-99 standard. (The original change was made because we thought that
SQL-99 required NULL to be always sorted at the same position, but
this was wrong).
START TRANSACTION (SQL-99 syntax) as alias for BEGIN.
Using this instead of BEGIN to start a transaction is recommended.
OLD_PASSWORD() as a synonym for PASSWORD().
ALL in group functions.
INNER JOIN and JOIN syntaxes.
For example, SELECT * FROM t1 INNER JOIN t2 didn't work before.
Bugs fixed:
multi-table-delete and InnoDB tables.
BLOB NOT NULL columns used with IS NULL.
CREATE TABLE (...)
AUTO_INCREMENT=#.
MIN(key_column) could in some cases return NULL on a column
with NULL and other values.
MIN(key_column) and MAX(key_column) could in some cases
return wrong values when used in OUTER JOIN.
MIN(key_column) and MAX(key_column) could return wrong
values if one of the tables was empty.
INTERVAL,
CASE, FIELD, CONCAT_WS, ELT and
MAKE_SET functions.
--lower-case-table-names (default on Windows)
and you had tables or databases with mixed case on disk, then
executing SHOW TABLE STATUS followed with DROP DATABASE
or DROP TABLE could fail with Errcode 13.
Functionality added or changed:
--log-error[=file_name] to mysqld_safe and
mysqld. This option will force all error messages to be put in a
log file if the option --console is not given. On Windows
--log-error is enabled by default, with a default name of
host_name.err if the name is not specified.
Warning: to Note: in the log files.
GROUP BY ... ORDER BY NULL
then the result is not sorted.
SHOW VARIABLES.
gethostbyaddr() to resolve a hostname. You can fix
this for earlier MySQL versions by starting mysqld with
--thread-stack=192K.
mysql_waitpid to the binary distribution and the
MySQL-client RPM subpackage (required for mysql-test-run).
MySQL RPM package to MySQL-server. When
updating from an older version, MySQL-server.rpm will simply replace
MySQL.rpm.
replicate_wild_do_table=db.% or
replicate_wild_ignore_table=db.%, these rules will be applied to
CREATE/DROP DATABASE too.
MASTER_POS_WAIT().
Bugs fixed:
rand() distribution from the first call.
mysqld to hang when a
table was opened with the HANDLER command and then
dropped without being closed.
NULL in an auto_increment field and also
uses LAST_INSERT_ID().
ORDER BY constant_expression.
mysqladmin --relative.
show status reported a strange number for
Open_files and Open_streams.
EXPLAIN on empty table.
LEFT JOIN that caused zero rows to be returned in
the case the WHERE condition was evaluated as FALSE after
reading const tables. (Unlikely condition).
FLUSH PRIVILEGES didn't correctly flush table/column privileges
when mysql.tables_priv is empty.
LOAD DATA INFILE on a file
that updated an auto_increment field with NULL or 0. This
bug only affected MySQL 4.0 masters (not slaves or MySQL 3.23 masters).
NOTE: If you have a slave that has replicated a file with
generated auto_increment fields then the slave data is corrupted and you
should reinitialise the affected tables from the master.
NOT NULL field to an
expression that returned NULL.
str LIKE "%other_str%" where str or
other_str contained characters >= 128.
LOAD DATA into an InnoDB table failed
with a table full error, the binary log was corrupted.
Functionality added or changed:
OPTIMIZE TABLE will for MyISAM tables treat all NULL
values as different when calculating cardinality. This helps in
optimising joins between tables where one of the tables has a lot of
NULL values in an indexed column:
SELECT * from t1,t2 where t1.a=t2.key_with_a_lot_of_null;
FORCE INDEX (key_list). This acts like
USE INDEX (key_list) but with the addition that a table scan is
assumed to be VERY expensive. One bad thing with this is that it makes
FORCE a reserved word.
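As an illustrative sketch (the table t1, its column a, and its index idx_a
are all hypothetical), the optimiser can be told that a table scan on t1
is to be avoided:
mysql> SELECT * FROM t1 FORCE INDEX (idx_a) WHERE a > 10;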
Bugs fixed:
LOAD DATA INFILE statement that
caused log rotation.
Functionality added or changed:
max_packet_length for libmysqld.c is now 1024*1024*1024.
max_allowed_packet in a file read by
mysql_options(MYSQL_READ_DEFAULT_FILE).
for clients.
ON UPDATE CASCADE in
FOREIGN KEY constraints. See the InnoDB section in the manual
for the InnoDB changelog.
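A sketch of the new clause, using hypothetical parent and child tables
(both must be InnoDB). With this definition, changing parent.id propagates
the new value to the matching child.parent_id rows:
mysql> CREATE TABLE parent (id INT NOT NULL, PRIMARY KEY (id)) TYPE=InnoDB;
mysql> CREATE TABLE child (parent_id INT, INDEX par_ind (parent_id),
    ->   FOREIGN KEY (parent_id) REFERENCES parent(id) ON UPDATE CASCADE)
    ->   TYPE=InnoDB;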
Bugs fixed:
ALTER TABLE with BDB tables.
QUOTE() function.
GROUP BY when used on BLOB column with NULL values.
NULLs in CASE ... WHEN ...
Functionality added or changed:
mysqlbug now also reports the compiler version used for building
the binaries (if the compiler supports the option --version).
Bugs fixed:
-DBIG_TABLES
on a 32 bit system.
mysql_drop_db() didn't check permissions properly so anyone could
drop another users database. DROP DATABASE is checked properly.
Functionality added or changed:
CHARACTER SET xxx and CHARSET=xxx
table options (to be able to read table dumps from 4.1).
IFNULL(A,B) is now set to be the
more 'general' of the types of A and B. (The order is
STRING, REAL or INTEGER).
Qcache_lowmem_prunes status variable (number of queries that were
deleted from cache because of low memory).
mysqlcheck so it can deal with table names containing dashes.
SHOW VARIABLES)
is no longer used when inserting a small (less than 100) number of rows.
SELECT ... FROM merge_table WHERE indexed_column=constant_expr.
LOCALTIME and LOCALTIMESTAMP as synonyms for
NOW().
CEIL is now an alias for CEILING.
CURRENT_USER() function can be used to get a user@host
value as it was matched in the GRANT system.
See section 6.3.6.2 Miscellaneous Functions.
CHECK constraints to be compatible with SQL-99. This made
CHECK a reserved word. (Checking of CHECK constraints is
still not implemented).
CAST(... as CHAR).
LIMIT syntax:
SELECT ... LIMIT # OFFSET #
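The two forms should be interchangeable; for example, on a hypothetical
table t1 the following statements return the same rows:
mysql> SELECT * FROM t1 LIMIT 5 OFFSET 10;
mysql> SELECT * FROM t1 LIMIT 10,5;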
mysql_change_user() will now reset the connection to the state
of a fresh connect (i.e., ROLLBACK any active transaction, close
all temporary tables, reset all user variables, etc.)
Bugs fixed:
multi table updates
--lower-case-table-names default on Mac OS X as the default
file system (HFS+) is case insensitive.
See section 6.1.3 Case Sensitivity in Names.
AUTOCOMMIT=0 mode didn't rotate binary log.
SELECT with joined tables with
ORDER BY and LIMIT clause when filesort had to be used.
In that case LIMIT was applied to filesort of one of the tables,
although it could not be.
This fix solved problems with LEFT JOIN too.
mysql_server_init() now makes a copy of all arguments. This fixes
a problem when using the embedded server in a C# program.
libmysqlclient library
that allowed a malicious MySQL server to crash the client
application.
mysql_change_user() handling.
All users are strongly recommended to upgrade to version 4.0.6.
--chroot command-line option of
mysqld from working.
"..." in boolean full-text search.
OPTIMIZE TABLE to corrupt the table
under some rare circumstances.
LOCK TABLES now works together with multi-table-update and
multi-table-delete.
--replicate-do=xxx didn't work for UPDATE commands.
(Bug introduced in 4.0.0)
REPLACE, AUTO_INCREMENT,
INSERT INTO ... SELECT ... were fixed. See the InnoDB changelog
in the InnoDB section of the manual.
Functionality added or changed:
SHOW PROCESSLIST
command
WEEK() so that one can get
week number according to the ISO 8601 specification.
(Old code should still work).
INSERT DELAYED threads don't hang on Waiting for
INSERT when one sends a SIGHUP to mysqld.
AND works according to SQL-99 when it comes to
NULL handling. In practice, this only affects queries where you
do something like WHERE ... NOT (NULL AND 0).
mysqld will now resolve basedir to its full path (with
realpath()). This enables one to use relative symlinks to the
MySQL installation directory. This will however cause show
variables to report different directories on systems where there is
a symbolic link in the path.
IGNORE INDEX or USE INDEX.
to be ignored.
--use-frm option to mysqlcheck. When used with
REPAIR, it gets the table structure from the .frm file, so the
table can be repaired even if the .MYI header is corrupted.
MAX() optimisation when used with JOIN and
ON expressions.
BETWEEN behaviour changed (see section 6.3.1.2 Comparison Operators).
Now datetime_col BETWEEN timestamp AND timestamp should work
as expected.
TEMPORARY MERGE tables now.
DELETE FROM myisam_table now shrinks not only the `.MYD' file but
also the `.MYI' file.
--open-files-limit=# option to mysqld_safe
it's now passed on to mysqld.
EXPLAIN from 'where used' to
'Using where' to make it more in line with other output.
safe_show_database as it was no longer used.
automake 1.5 and
libtool 1.4.
--ignore-space) back to the
original --ignore-spaces in mysqlclient. (Both syntaxes will
work).
UPDATE privilege when using REPLACE.
DROP TEMPORARY TABLE ..., to be used to make
replication safer.
BEGIN/COMMIT are now stored in the binary log on
COMMIT and not stored if one does ROLLBACK. This fixes
some problems with non-transactional temporary tables used inside
transactions.
SELECT * FROM (t2 LEFT JOIN t3 USING (a)), t1 worked, but
not SELECT * FROM t1, (t2 LEFT JOIN t3 USING (a)). Note that
braces are simply removed; they do not change the way the join is
executed.
READ UNCOMMITTED and READ COMMITTED.
For a detailed InnoDB changelog, see section 7.5.15 InnoDB Change History
in this manual.
Bugs fixed:
MAX() optimisation when used with JOIN and
ON expressions.
INSERT DELAYED threads don't hang on Waiting for
INSERT when one sends a SIGHUP to mysqld.
IGNORE INDEX or USE INDEX.
root user in mysqld_safe.
CHECK
or REPAIR.
GROUP BY queries that
didn't return any result.
mysqlshow to work properly with wildcarded database names and
with database names that contain underscores.
MyISAM crash when using dynamic-row tables with huge numbers of
packed fields.
BDB transactions.
MATCH
relevance calculations.
IN BOOLEAN MODE that made MATCH
to return incorrect relevance value in some complex joins.
MyISAM key length to a value
slightly less than 500. It is exactly 500 now.
GROUP BY on columns that may have a NULL value
doesn't always use disk based temporary tables.
--des-key-file argument to mysqld
is interpreted relative to the data directory if given as a relative pathname.
NULL has to be MyISAM. This was okay for 3.23, but not
needed in 4.*. This resulted in a slowdown of many queries since 4.0.2.
ORDER BY ... LIMIT #
to not return all rows.
REPAIR TABLE and myisamchk
to corrupt FULLTEXT indexes.
mysql grant table database. Now queries
in this database are not cached in the query cache.
mysqld_safe for some shells.
MyISAM MERGE table has more than 2^32 rows and
MySQL was not compiled with -DBIG_TABLES.
ORDER BY ... DESC problems with InnoDB tables.
GRANT/REVOKE failed if hostname was given in
non-matching case.
LOAD DATA INFILE when setting a
timestamp to a string value of '0'.
myisamchk -R mode.
mysqld to crash on REVOKE.
ORDER BY when there is a constant in the SELECT
statement.
mysqld couldn't open the
privilege tables.
SET PASSWORD FOR ... closed the connection in case of errors (bug
from 4.0.3).
max_allowed_packet in mysqld to 1 GB.
INSERT on a table with an
AUTO_INCREMENT key which was not in the first part of the key.
LOAD DATA INFILE to not recreate index if the table had
rows from before.
AES_DECRYPT() with incorrect arguments.
--skip-ssl can now be used to disable SSL in the MySQL clients,
even if one is using other SSL options in an option file or previously
on the command line.
MATCH ... AGAINST( ... IN BOOLEAN MODE)
used with ORDER BY.
LOCK TABLES and CREATE TEMPORARY TABLES privilege on
the database level. One must run the mysql_fix_privilege_tables
script on old installations to activate these.
SHOW TABLE ... STATUS, compressed tables sometimes showed up as
dynamic.
SELECT @@[global|session].var_name didn't report
global | session in the result column name.
FLUSH LOGS in a circular
replication setup created an infinite number of binary log files.
Now a rotate-binary-log command in the binary log will not cause slaves
to rotate logs.
STOP EVENT from binary log when doing FLUSH LOGS.
SHOW NEW MASTER FOR SLAVE as this needs to be
completely changed in 4.1.
UNIQUE key) appeared in ORDER BY
part of SELECT DISTINCT.
--log-binary=a.b.c now properly strips off .b.c.
FLUSH LOGS removed numerical extension for all future update logs.
GRANT ... REQUIRE didn't store the SSL information in the
mysql.user table if SSL was not enabled in the server.
GRANT ... REQUIRE NONE can now be used to remove SSL information.
AND is now optional between REQUIRE options.
REQUIRE option was not properly saved, which could cause strange
output in SHOW GRANTS.
mysqld --help reports correct values for --datadir
and --bind-address.
mysqld was started.
SHOW VARIABLES on some 64 bit systems
(like Solaris sparc).
--set-variable syntax didn't work for
those options that didn't have a valid variable in my_option struct.
This affected at least default-table-type option.
REPAIR TABLE and
myisamchk --recover to fail on tables with duplicates in a unique
key.
CREATE TABLE table_name
SELECT expression(),...
SELECT * FROM table-list GROUP BY ... and
SELECT DISTINCT * FROM ....
--slow-log when logging an administrator command
(like FLUSH TABLES).
OPTIMIZE of locked and modified table,
reported table corruption.
--skip-,
--enable-). --skip-external-locking didn't work and the bug
may have affected other similar options.
tee option.
SELECT ... FROM many_tables .. ORDER BY key limit #
SHOW OPEN TABLES when a user didn't have access
permissions to one of the opened tables.
configure ... --localstatedir=....
mysql.server script.
mysqladmin shutdown when pid file was modified
while mysqladmin was still waiting for the previous one to
disappear. This could happen during a very quick restart and caused
mysqladmin to hang until shutdown_timeout seconds had
passed.
AUTO_INCREMENT columns to
NULL in LOAD DATA INFILE.
SHOW MASTER STATUS now returns an empty set if binary log is not
enabled.
SHOW SLAVE STATUS now returns an empty set if slave is not initialised.
SELECT DISTINCT ... FROM many_tables ORDER BY
not-used-column.
BIGINTs and quoted strings.
QUOTE() function that performs SQL quoting to produce values
that can be used as data values in queries.
DELAY_KEY_WRITE to an enum to allow one to set
DELAY_KEY_WRITE for all tables without taking down the server.
IF(condition,column,NULL) so that it returns
the value of the column type.
safe_mysqld a symlink to mysqld_safe in binary distribution.
user.db
table.
CREATE TABLE ... SELECT function().
mysqld now has the option --temp-pool enabled by default as this
gives better performance with some operating systems.
CHANGE MASTER TO if the slave thread died very quickly.
--core-file option is specified, the server calls
setrlimit() to set the maximum allowed core file size to unlimited,
so core files can be generated.
--count=N (-c) option to mysqladmin, to make the
program do only N iterations. To be used with --sleep (-i).
Useful in scripts.
UPDATE: when updating a table,
do_select() became confused about reading records from a cache.
UPDATE when several fields were referenced
from a single table
REVOKE that caused user resources to be randomly set.
GRANT for the new CREATE TEMPORARY TABLE privilege.
DELETE when tables are re-ordered in the
table initialisation method and ref_lengths are of different sizes.
SELECT DISTINCT with large tables.
DEFAULT with INSERT statement.
myisam_max_sort_file_size and
myisam_max_extra_sort_file_size are now given in bytes, not megabytes.
MyISAM/ISAM files is now turned
off by default. One can turn this on with --external-locking.
(For most users this is never needed).
INSERT ... SET db_name.table_name.colname=''.
DROP DATABASE
SET [GLOBAL | SESSION] syntax to change thread-specific and global
server variables at runtime.
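A minimal sketch of the runtime syntax (the variable names and values here
are only illustrative):
mysql> SET GLOBAL max_connections=200;
mysql> SET SESSION sort_buffer_size=1048576;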
slave_compressed_protocol.
query_cache_startup_type to query_cache_type,
myisam_bulk_insert_tree_size to bulk_insert_buffer_size,
record_buffer to read_buffer_size and
record_rnd_buffer to record_rnd_buffer_size.
--skip-locking to --skip-external-locking.
query_buffer_size.
mysql client
non-functional.
AUTO_INCREMENT support to MERGE tables.
LOG() function to accept an optional arbitrary base
parameter.
See section 6.3.3.2 Mathematical Functions.
LOG2() function (useful for finding out how many bits
a number would require for storage).
LN() natural logarithm function for compatibility with
other databases. It is synonymous with LOG(X).
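For illustration, the three functions should be consistent with one another
(the two-argument LOG() takes the base as its first argument):
mysql> SELECT LOG(2,65536);
        -> 16.000000
mysql> SELECT LOG2(65536);
        -> 16.000000
mysql> SELECT LN(2);
        -> 0.693147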
NULL handling for default values in DESCRIBE
table_name.
truncate() to round up negative values to the nearest integer.
--chroot=path option to execute chroot() immediately
after all options have been parsed.
lower_case_table_names now also affects database names.
XOR operator (logical and bitwise XOR) with ^
as a synonym for bitwise XOR.
IS_FREE_LOCK("lock_name").
Based on code contributed by Hartmut Holzgraefe hartmut@six.de.
mysql_ssl_clear() from C API, as it was not needed.
DECIMAL and NUMERIC types can now read exponential numbers.
SHA1() function to calculate 160 bit hash value as described
in RFC 3174 (Secure Hash Algorithm). This function can be considered a
cryptographically more secure equivalent of MD5().
See section 6.3.6.2 Miscellaneous Functions.
AES_ENCRYPT() and AES_DECRYPT() functions to perform
encryption according to AES standard (Rijndael).
See section 6.3.6.2 Miscellaneous Functions.
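A brief illustration (the key string is purely hypothetical):
mysql> SELECT SHA1('abc');
        -> 'a9993e364706816aba3e25717850c26c9cd0d89d'
mysql> SELECT AES_DECRYPT(AES_ENCRYPT('secret text','my_key'),'my_key');
        -> 'secret text'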
--single-transaction option to mysqldump, allowing a
consistent dump of InnoDB tables.
See section 4.8.5 mysqldump, Dumping Table Structure and Data.
innodb_log_group_home_dir in SHOW VARIABLES.
FULLTEXT index is present and no tables are used.
CREATE TEMPORARY TABLES, EXECUTE,
LOCK TABLES, REPLICATION CLIENT, REPLICATION SLAVE,
SHOW DATABASES and SUPER. To use these, you must have
run the mysql_fix_privilege_tables script after upgrading.
TRUNCATE TABLE; This fixes some core
dump/hangup problems when using TRUNCATE TABLE.
DELETE when optimiser uses only indices.
ALTER TABLE table_name RENAME new_table_name is as fast
as RENAME TABLE.
GROUP BY with two or more fields, where at least one
field can contain NULL values.
Turbo Boyer-Moore algorithm to speed up LIKE "%keyword%"
searches.
DROP DATABASE with symlink.
REPAIR ... USE_FRM.
EXPLAIN with LIMIT offset != 0.
"..." in boolean full-text search.
* in boolean full-text search.
+word*s in the query).
MATCH expression that did not use an index appeared twice.
mysqldump.
ft_min_word_len characters.
--without-query-cache.
INET_NTOA() now returns NULL if you give it an argument that
is too large (greater than the value corresponding to 255.255.255.255).
SQL_CALC_FOUND_ROWS to work with UNIONs. It will work only
if the first SELECT has this option and if there is a global LIMIT
for the entire statement. For the moment, this requires using parentheses for
individual SELECT queries within the statement.
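A sketch of the parenthesized form this requires (t1 and t2 are hypothetical
tables with a column a); the trailing LIMIT applies to the UNION as a whole,
and SQL_CALC_FOUND_ROWS appears only in the first SELECT:
mysql> (SELECT SQL_CALC_FOUND_ROWS a FROM t1)
    -> UNION
    -> (SELECT a FROM t2)
    -> LIMIT 10;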
SQL_CALC_FOUND_ROWS and LIMIT.
CREATE TABLE ...(... VARCHAR(0)).
SIGINT and SIGQUIT problems in `mysql.cc' on Linux
with some glibc versions.
net_store_length() linked in the CONVERT::store() method.
DOUBLE and FLOAT columns now honor the UNSIGNED flag
on storage.
InnoDB now retains foreign key constraints through ALTER TABLE
and CREATE/DROP INDEX.
InnoDB now allows foreign key constraints to be added through the
ALTER TABLE syntax.
InnoDB tables can now be set to automatically grow in size (autoextend).
--ignore-lines=n option to mysqlimport. This has the
same effect as the IGNORE n LINES clause for LOAD DATA.
UNION with last offset being transposed to total result
set.
REPAIR ... USE_FRM added.
DEFAULT_SELECT_LIMIT is always imposed on UNION
result set.
SELECT options can appear only in the first
SELECT.
LIMIT with UNION, where last select is in
the braces.
UNION operations.
SELECT with an empty
HEAP table.
ORDER BY column DESC now sorts NULL values first.
(In other words, NULL values sort first in all cases, whether or
not DESC is specified.) This was changed back in 4.0.10.
WHERE key_name='constant' ORDER BY key_name DESC.
SELECT DISTINCT ... ORDER BY DESC optimisation.
... HAVING 'GROUP_FUNCTION'(xxx) IS [NOT] NULL.
--user=# option for mysqld to be specified
as a numeric user ID.
SQL_CALC_FOUND_ROWS returned an incorrect value when used
with one table and ORDER BY and with InnoDB tables.
SELECT 0 LIMIT 0 doesn't hang thread.
USE/IGNORE INDEX when using
many keys with the same start column.
BerkeleyDB and InnoDB tables when
we can use an index that covers the whole row.
InnoDB sort-buffer handling to take less memory.
DELETE and InnoDB tables.
TRUNCATE and InnoDB tables that produced the
error Can't execute the given command because you have active locked
tables or an active transaction.
NO_UNSIGNED_SUBTRACTION to the set of flags that may be
specified with the --sql-mode option for mysqld. It disables
unsigned arithmetic rules when it comes to subtraction. (This will make
MySQL 4.0 behave more closely to 3.23 with UNSIGNED columns).
|, <<, ...) is now of
type unsigned integer.
nan values in MyISAM to make it possible to
repair tables with nan in float or double columns.
myisamchk where it didn't correctly update number of
``parts'' in the MyISAM index file.
autoconf 2.52 (from autoconf 2.13).
const tables. This fix also
improves performance a bit when referring to another table from a
const table.
UPDATE statement.
DELETE.
SELECT CONCAT(argument_list) ... GROUP BY 1.
INSERT ... SELECT did a full rollback in case of an error. Fixed
so that we only roll back the last statement in the current transaction.
NULL.
BIT_LENGTH() function.
GROUP BY BINARY column.
NULL keys in HEAP tables.
ORDER BY in queries of type:
SELECT * FROM t WHERE key_part1=1 ORDER BY key_part1 DESC,key_part2 DESC
FLUSH QUERY CACHE.
CAST() and CONVERT() functions. The CAST and
CONVERT functions are nearly identical and mainly useful when you
want to create a column with a specific type in a CREATE ... SELECT
statement. For more information, read section 6.3.5 Cast Functions.
CREATE ... SELECT on DATE and TIME functions now
create columns of the expected type.
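A minimal sketch of the intended use (the table and column names are made
up); without the casts, the columns of t_new would take whatever types the
expressions happen to produce:
mysql> CREATE TABLE t_new
    ->   SELECT CAST('2000-01-01' AS DATE) AS d,
    ->          CAST(1 AS SIGNED) AS i;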
Null and Index_type to SHOW INDEX
output.
--no-beep and --prompt options to mysql command-line client.
GRANT ... WITH MAX_QUERIES_PER_HOUR N1
MAX_UPDATES_PER_HOUR N2
MAX_CONNECTIONS_PER_HOUR N3;
See section 4.3.6 Limiting user resources.
mysql_secure_installation to the `scripts/' directory.
system command to mysql.
HANDLER was used with some unsupported table type.
mysqldump now puts ALTER TABLE tbl_name DISABLE KEYS and
ALTER TABLE tbl_name ENABLE KEYS in the sql dump.
mysql_fix_extensions script.
LOAD DATA FROM MASTER on OSF/1.
DES_ENCRYPT() and DES_DECRYPT() functions.
FLUSH DES_KEY_FILE statement.
--des-key-file option to mysqld.
HEX(string) now returns the characters in string converted to
hexadecimal.
GRANT when using lower_case_table_names=1.
SELECT ... IN SHARE MODE to
SELECT ... LOCK IN SHARE MODE (as in MySQL 3.23).
SELECT queries.
MATCH ... AGAINST(... IN BOOLEAN MODE) can now work
without FULLTEXT index.
FULLTEXT indexes.
DELETE ... WHERE ... MATCH ....
MATCH ... AGAINST(... IN BOOLEAN MODE).
Note: you must rebuild your tables with
ALTER TABLE tablename TYPE=MyISAM to be
able to use boolean full-text search.
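As a sketch, assuming a hypothetical table articles with a FULLTEXT index
on (title,body); this returns rows that contain the word MySQL but not the
word Oracle:
mysql> SELECT id, title FROM articles
    ->   WHERE MATCH (title,body) AGAINST ('+MySQL -Oracle' IN BOOLEAN MODE);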
LOCATE() and INSTR() are now case-sensitive if either
argument is a binary string.
RAND() initialisation so that RAND(N) and
RAND(N+1) are more distinct.
UPDATE ... ORDER BY.
INSERT INTO ... SELECT to stop on errors by default.
DATA DIRECTORY and INDEX DIRECTORY directives on Windows.
MODIFY and CHANGE in ALTER TABLE to accept
the FIRST and AFTER keywords.
ORDER BY on a whole InnoDB table.
--xml option to mysql for producing XML output.
ft_min_word_len, ft_max_word_len, and
ft_max_word_len_for_sort.
libmysqld, the embedded MySQL server
library. Also added example programs (a mysql client and
mysqltest test program) which use libmysqld.
my_thread_init() and my_thread_end()
from `mysql_com.h', and added mysql_thread_init() and
mysql_thread_end() to `mysql.h'.
MyISAM to be able to handle these.
BIGINT constants now work. MIN() and MAX()
now handle signed and unsigned BIGINT numbers correctly.
latin1_de which provides correct German sorting.
STRCMP() now uses the current character set when doing comparisons,
which means that the default comparison behaviour now is case-insensitive.
TRUNCATE TABLE and DELETE FROM tbl_name are now separate
functions. One bonus is that DELETE FROM tbl_name now returns
the number of deleted rows, rather than zero.
DROP DATABASE now executes a DROP TABLE on all tables in
the database, which fixes a problem with InnoDB tables.
UNION.
DELETE operations.
HANDLER interface to MyISAM tables.
INSERT on MERGE tables. Patch from
Benjamin Pflugmann.
WEEK(#,0) to match the calendar in the USA.
COUNT(DISTINCT) is about 30% faster.
IS NULL, ISNULL() and some other internal primitives.
myisam_bulk_insert_tree_size variable.
CHAR/VARCHAR) keys is now much faster.
SELECT DISTINCT * from tbl_name ORDER by key_part1 LIMIT #.
SHOW CREATE TABLE now shows all table attributes.
ORDER BY ... DESC can now use keys.
LOAD DATA FROM MASTER ``automatically'' sets up a slave.
safe_mysqld to mysqld_safe to make this name more
in line with other MySQL scripts/commands.
MyISAM tables. Symlink handling is
now enabled by default for Windows.
SQL_CALC_FOUND_ROWS and FOUND_ROWS(). This makes it
possible to know how many rows a query would have returned
without a LIMIT clause.
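For example, on a hypothetical table t1, the second statement returns the
number of rows the first one would have produced without the LIMIT clause:
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM t1 WHERE a > 100 LIMIT 10;
mysql> SELECT FOUND_ROWS();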
SHOW OPEN TABLES.
SELECT expression LIMIT ....
IDENTITY as a synonym for AUTO_INCREMENT (like Sybase).
ORDER BY syntax to UPDATE and DELETE.
SHOW INDEXES is now a synonym for SHOW INDEX.
ALTER TABLE tbl_name DISABLE KEYS and
ALTER TABLE tbl_name ENABLE KEYS commands.
IN as a synonym for FROM in SHOW commands.
FULLTEXT indexes.
REPAIR TABLE, ALTER TABLE, and OPTIMIZE TABLE
for tables with FULLTEXT indexes are now up to 100 times faster.
X'hexadecimal-number'.
FLUSH TABLES WITH READ LOCK.
DATETIME = constant in WHERE optimisation.
--master-data and --no-autocommit options to
mysqldump. (Thanks to Brian Aker for this.)
mysql_explain_log.sh to distribution.
(Thanks to mobile.de).
Please note that since release 4.0 is now production level, only critical fixes are done in the 3.23 release series. You are recommended to upgrade when possible, to take advantage of all speed and feature improvements in 4.0. See section 2.5.2 Upgrading From Version 3.23 to 4.0.
The 3.23 release has several major features that are not present in previous versions. We have added three new table types:
MyISAM
InnoDB
BerkeleyDB or BDB
Note that only MyISAM is available in the standard binary distribution.
The 3.23 release also includes support for database replication between a master and many slaves, full-text indexing, and much more.
All new features are being developed in the 4.x version. Only bug fixes and minor enhancements to existing features will be added to 3.23.
The replication code and BerkeleyDB code are still not as well tested as the rest of the code, so we will probably need to do a couple of future releases of 3.23 with small fixes for this part of the code. As long as you don't use these features, you should be quite safe with MySQL 3.23!
Note that the above doesn't mean that replication or Berkeley DB don't
work. We have done a lot of testing of all code, including replication
and BDB without finding any problems. It only means that not as many
users use this code as the rest of the code and because of this we are
not yet 100% confident in this code.
kill pid-of-mysqld works on Mac OS X.
SHOW TABLE STATUS displayed wrong Row_format for
myisampack'ed tables. (Bug #427)
SHOW VARIABLES LIKE 'innodb_data_file_path' displayed only the
name of the first datafile. (Bug #468)
UPDATE
rows in a table even if one had a global UPDATE privilege and a
database SELECT privilege.
SELECT and wildcarded select list,
when user only had partial column SELECT privileges on the table.
WHERE clause with constant
expression like in WHERE 1 AND (a=1 AND b=1).
mysqlbinlog to fail.
innodb_flush_log_at_trx_commit was changed
from 0 to 1 to make InnoDB tables ACID by default. See section 7.5.3 InnoDB Startup Options.
LOAD DATA INFILE IGNORE : when reading
the binary log, mysqlbinlog and the replication code read REPLACE
instead of IGNORE. This could make the slave's table
become different from the master's table. (Bug #218)
MyISAM when a row is inserted into a table with a
large number of NULL columns. Bug was caused by wrong calculation
of the record length, as the space required for storage of NULL
bits was not added to the total record length.
TRUNCATE table_name or
DELETE FROM table_name which could cause an INSERT to
table_name to be written to the binary log before the
TRUNCATE/DELETE command.
UPDATE of InnoDB tables where one row could be
updated multiple times.
PROCEDURE ANALYSE() to report DATE instead of
NEWDATE.
PROCEDURE ANALYSE(#) to restrict number of values in
enum to # also for string values.
mysqldump no longer silently deletes the binlogs when called with
--master-data or --first-slave;
while this behaviour was convenient for some
users, others may suffer from it. Now one has to explicitly ask for
this deletion with the new --delete-master-logs option.
mysqldump when it was called with
--master-data: the CHANGE MASTER TO commands appended to
the SQL dump had wrong coordinates. (Bug #159)
sort_buffer variable.
GRANT UPDATE on column level.
HAVING with GROUP BY.
WHERE clause. (Bug #142)
.MYI files.
--user
option specified on the command line. (Normally this comes from
`/etc/my.cnf')
BACKUP TABLE to overwrite existing files.
LOCK TABLE and
another thread did a DROP TABLE. In this case one could do
a KILL on one of the threads to resolve the deadlock.
LOAD DATA INFILE was not replicated by slave if
replicate_*_table was set on the slave.
CHAR(0) columns that could cause wrong
results from the query.
SHOW VARIABLES on 64-bit platforms. The bug was
caused by wrong declaration of variable server_id.
SHOW TABLE STATUS now reports that it can
contain NULL values (which is the case for a crashed `.frm' file).
rpl_rotate_logs test to not fail on certain platforms (e.g.
Mac OS X) due to a too long file name (changed slave-master-info.opt
to .slave-mi).
BLOB NOT NULL columns used with IS NULL.
MAX() optimisation in MERGE tables.
RAND() initialization for new connections.
poll() system call, which resulted in a timeout of twice the value specified,
as the timeout was applied in both select() and poll().
SELECT * FROM table WHERE datetime1 IS NULL OR datetime2 IS NULL.
INTERVAL,
CASE, FIELD, CONCAT_WS, ELT and
MAKE_SET functions.
--lower-case-table-names (default on Windows)
and you had tables or databases with mixed case on disk, then
executing SHOW TABLE STATUS followed with DROP DATABASE
or DROP TABLE could fail with Errcode 13.
NULL in an auto_increment field and also
uses LAST_INSERT_ID().
mysqladmin --relative.
show status reported a strange number for
Open_files and Open_streams.
free'd pointer bug in mysql_change_user()
handling, that enabled a specially hacked version of MySQL client
to crash mysqld. Note that one needs to log in to the server
using a valid user account to be able to exploit this bug.
--slow-log when logging an administrator command
(like FLUSH TABLES).
GROUP BY when used on BLOB column with NULL values.
NULLs in CASE ... WHEN ....
--chroot (see section D.4.4 Changes in release 3.23.54 (05 Dec 2002))
is reverted. Unfortunately, there is no way to make it work without
introducing backward-incompatible changes in `my.cnf'.
Those who need --chroot functionality should upgrade to MySQL 4.0.
(The fix in the 4.0 branch did not break backward-compatibility).
--lower-case-table-names default on Mac OS X as the default
file system (HFS+) is case insensitive.
NOHUP_NICENESS
testing.
AUTOCOMMIT=0 mode didn't rotate binary log.
scripts/make_binary_distribution that resulted in
a remaining @HOSTNAME@ variable instead of replacing it with the
correct path to the hostname binary.
SHOW PROCESSLIST to core
dump in pthread_mutex_unlock() if a new thread was connecting.
SLAVE STOP if the thread executing the query has locked
tables. This removes a possible deadlock situation.
mysqld
with a specially crafted packet.
free'd pointer) when altering a
temporary table.
libmysqlclient library
that allowed a malicious MySQL server to crash the client
application.
mysql_change_user() handling.
All users are strongly recommended to upgrade to version 3.23.54.
--chroot command-line option of mysqld
from working.
OPTIMIZE TABLE to corrupt the table
under some rare circumstances.
mysqlcheck so it can deal with table names containing dashes.
NULL field with <=> NULL.
IGNORE INDEX and USE INDEX sometimes
to be ignored.
GROUP BY queries that
didn't return any result.
MATCH ... AGAINST () >=0 was treated as if it was
>.
SHOW PROCESSLIST when running with an
active slave (unlikely timing bug).
TEMPORARY MERGE tables now.
--core-file works on Linux (at least on kernel 2.4.18).
BDB and ALTER TABLE.
GROUP BY
... ORDER BY queries. Symptom was that mysqld died in function
send_fields.
BLOB values in internal
temporary tables used for some (unlikely) GROUP BY queries.
WHERE column_name = key_column_name was calculated as true
for NULL values.
LEFT JOIN ... WHERE key_column=NULL.
MyISAM crash when using dynamic-row tables with huge numbers of
packed fields.
automake 1.5 and
libtool 1.4.
SHOW INNODB STATUS was used and skip-innodb
was defined.
LOCK TABLES on Windows when one connects to a
database that contains upper case letters.
--skip-show-databases doesn't reset the --port option.
safe_mysqld for some shells.
FLUSH STATUS doesn't reset delayed_insert_threads.
BINARY cast on a NULL value.
GRANT at the same time a new
user logged in or did a USE DATABASE.
ALTER TABLE and RENAME TABLE when running with
-O lower_case_table_names=1 (typically on Windows) when giving the
table name in uppercase.
-O lower_case_table_names=1 also converts database
names to lower case.
SELECT ... ORDER BY ... LIMIT.
AND/OR to report that they can return NULL. This fixes a
bug in GROUP BY on AND/OR expressions that return
NULL.
OPTIMIZE of locked and modified MyISAM table,
reported table corruption.
BDB-related ALTER TABLE bug with dropping a column
and shutting down immediately thereafter.
configure ... --localstatedir=....
UNSIGNED BIGINT on AIX (again).
BEGIN/COMMIT around transaction in the binary log.
This makes replication honour transactions.
user.db
table.
RAND() to make it less predictable.
GROUP BY on result with expression that created a
BLOB field.
GROUP BY on columns that have NULL values.
To solve this we now create a MyISAM temporary table when doing a
GROUP BY on a possible NULL item.
From MySQL 4.0.5 we can use in memory HEAP tables for this case.
SLAVE START, SLAVE STOP and automatic repair
of MyISAM tables that could cause table cache to be corrupted.
OPTIMIZE TABLE and REPAIR TABLE.
UNIQUE() key where first part could contain NULL values.
MERGE tables and MAX() function.
ALTER TABLE with BDB tables.
LOAD DATA INFILE to binary log with no
active database.
DROP DATABASE on a
database with InnoDB tables.
mysql_info() returns 0 for 'Duplicates' when using
INSERT DELAYED IGNORE.
-DHAVE_BROKEN_REALPATH to the Mac OS X (darwin) compile
options in `configure.in' to fix a failure under high load.
mysqldump XML output.
ENUM values. (This fixed a problem with
SHOW CREATE TABLE.)
CONCAT_WS() that cut the result.
Com_show_master_stat to
Com_show_master_status and Com_show_slave_stat to
Com_show_slave_status.
gethostbyname() to make the client library
thread-safe even if gethostbyname_r doesn't exist.
GRANT.
DROP DATABASE with symlinked directory.
DATETIME and value outside
DATETIME range.
BDB doc files from the source tree, as they're not
needed (MySQL covers BDB in its own documentation).
glibc 2.2 (needed for
make dist).
FLOAT(X+1,X) is not converted to FLOAT(X+2,X).
(This also affected DECIMAL, DOUBLE and REAL types)
IF() is case-insensitive if the second and
third arguments are case-sensitive.
gethostbyname_r.
'+11111' for
DECIMAL(5,0) UNSIGNED columns, we will just drop the sign.
ISNULL(expression_which_cannot_be_null) and
ISNULL(constant_expression).
glibc library that we used with the 3.23.50
Linux-x86 binaries.
<row> tags for mysqldump XML output.
crash-me and gcc 3.0.4.
@@unknown_variable doesn't hang server.
@@VERSION as a synonym for VERSION().
SHOW VARIABLES LIKE 'xxx' is now case-insensitive.
GET_LOCK() on HP-UX with DCE threads.
SIGINT and SIGQUIT problems in mysql.
InnoDB now retains foreign key constraints through ALTER TABLE
and CREATE/DROP INDEX.
InnoDB now allows foreign key constraints to be added through the
ALTER TABLE syntax.
InnoDB tables can now be set to automatically grow in size (autoextend).
gcc 3.0.4, which
should make them a bit faster.
--enable-named-pipe.
WHERE key_column = 'J' or key_column='j'.
--log-bin with LOAD DATA
INFILE without an active database.
RENAME TABLE when used with
lower_case_table_names=1 (default on Windows).
DROP TABLE on a table
that was in use by a thread that also used queries on only temporary tables.
SHOW CREATE TABLE and PRIMARY KEY when using
32 indexes.
SET PASSWORD for the anonymous user.
mysql_options().
--enable-local-infile.
bison.
DATE_FORMAT() returned an empty string when used
with GROUP BY.
mysqldump --disable-keys to work.
NULL.
LOAD DATA LOCAL INFILE more secure.
glibc library,
which has serious problems under high load and Red Hat 7.2. The 3.23.49 binary
release doesn't have this problem.
--xml option to mysqldump for producing XML output.
autoconf 2.52 (from autoconf 2.13)
const tables.
InnoDB.
InnoDB variables were always shown in SHOW VARIABLES as
OFF on high-byte-first systems (like SPARC).
InnoDB table and another
thread doing an ALTER TABLE on the same table. Before that,
mysqld could crash with an assertion failure in `row0row.c',
line 474.
InnoDB SQL optimiser to favor index searches more often
over table scans.
InnoDB tables when several large
SELECT queries are run concurrently on a multiprocessor Linux
computer. Large CPU-bound SELECT queries will now also generally
run faster on all platforms.
InnoDB now prints after crash recovery the
latest MySQL binlog name and the offset InnoDB was able to recover
to. This is useful, for example, when resynchronising a master and a
slave database in replication.
InnoDB
tables.
InnoDB tablespace.
InnoDB now prevents a FOREIGN KEY declaration where the
signedness is not the same in the referencing and referenced integer columns.
SHOW CREATE TABLE or SHOW TABLE STATUS could cause
memory corruption and make mysqld crash. Especially at risk was
mysqldump, because it frequently calls SHOW CREATE TABLE.
AUTO_INCREMENT column were
wrapped inside one LOCK TABLES, InnoDB asserted in
`lock0lock.c'.
NULL values in a UNIQUE secondary
index for an InnoDB table. But CHECK TABLE was not relaxed: it
reports the table as corrupt. CHECK TABLE no longer complains in
this situation.
SHOW GRANTS now shows REFERENCES instead of REFERENCE.
SELECT ... WHERE key=@var_name OR key=@var_name2
InnoDB keys to 500 bytes.
InnoDB now supports NULL in keys.
SELECT RELEASE_LOCK().
DO expression,[expression]
slave-skip-errors option.
SHOW STATUS is
now much longer.)
InnoDB tables.
GROUP BY expr DESC works.
t1 LEFT JOIN t2 ON t2.key=constant.
mysql_config now also works with binary (relocated) distributions.
InnoDB and BDB tables will now use an index when doing an
ORDER BY on the whole table.
BDB tables.
ANALYZE, REPAIR, and OPTIMIZE TABLE when
the thread is waiting to get a lock on the table.
ANALYZE TABLE.
INSERT DELAYED
which could cause the binary log to have rows that were not yet written
to MyISAM tables.
(UPDATE|DELETE) ...WHERE MATCH bugfix.
MyISAM files.
--core-file now works on Solaris.
InnoDB to complain if it cannot find
free blocks from the buffer cache during recovery.
InnoDB insert buffer B-tree handling that could cause
crashes.
InnoDB lock timeout handling.
ALTER TABLE on a TEMPORARY InnoDB
table.
OPTIMIZE TABLE that reset index cardinality if it
was up to date.
t1 LEFT_JOIN t2 ... WHERE t2.date_column IS NULL when
date_column was declared as NOT NULL.
BDB tables and keys on BLOB columns.
MERGE tables on OS with 32-bit file pointers.
TIME_TO_SEC() when using negative values.
Rows_examined count in slow query log.
AVG() column in HAVING.
DAYOFYEAR(column) will return NULL for 0000-00-00 dates.
SELECT * FROM date_col="2001-01-01" and date_col=time_col)
Can't write, because of unique
constraint with some GROUP BY queries.
sjis character strings used within quoted table
names.
CREATE ... FULLTEXT keys with other
storage engines than MyISAM.
signal() on Windows because this appears to not be
100% reliable.
WHERE col_name=NULL on an indexed column
that had NULL values.
LEFT JOIN ... ON (col_name = constant) WHERE col_name = constant.
% could cause
a core dump.
TCP_NODELAY was not used on some systems. (Speed problem.)
The following changes are for InnoDB tables:
InnoDB variables to SHOW VARIABLES.
InnoDB tables.
DROP DATABASE now works also for InnoDB tables.
InnoDB now supports datafiles and raw disk partitions bigger
than 4 GB on those operating systems that have big files.
InnoDB calculates better table cardinality estimates for the
MySQL optimiser.
latin1 are ordered
according to the MySQL ordering.
Note: if you are using latin1 and have inserted characters whose
code is greater than 127 into an indexed CHAR column, you should
run CHECK TABLE on your table when you upgrade to 3.23.44, and
drop and reimport the table if CHECK TABLE reports an error!
innodb_thread_concurrency, helps in
performance tuning in heavily concurrent environments.
innodb_fast_shutdown, speeds up
server shutdown.
innodb_force_recovery, helps to save
your data in case the disk image of the database becomes corrupt.
innodb_monitor has been improved and a new
innodb_table_monitor added.
AUTO_INCREMENT columns with
multiple-line inserts.
MAX(col) is selected from an empty table, and
col is not the first column in a multi-column index.
INSERT DELAYED and FLUSH TABLES introduced
in 3.23.42.
SELECT with
many tables and multi-column indexes and 'range' type.
EXPLAIN SELECT when using
many tables and ORDER BY.
LOAD DATA FROM MASTER when using table with
CHECKSUM=1.
BDB tables.
BDB tables and UNIQUE columns defined
as NULL.
myisampack when using pre-space filled CHAR
columns.
--safe-user-create.
LOCK TABLES and BDB tables.
REPAIR TABLE on MyISAM tables with row
lengths in the range from 65517 to 65520 bytes.
mysqladmin shutdown when there was
a lot of activity in other threads.
INSERT DELAYED where delay thread could be
hanging on upgrading locks for no apparent reason.
myisampack and BLOB.
MERGE table come from the same
database.
LOAD DATA INFILE and transactional tables.
INSERT DELAYED with wrong column definition.
REPAIR of some particularly broken tables.
InnoDB and AUTO_INCREMENT columns.
InnoDB and RENAME TABLE columns.
InnoDB and BLOB columns. If you have
used BLOB columns larger than 8000 bytes in an InnoDB
table, it is necessary to dump the table with mysqldump, drop it and
restore it from the dump.
InnoDB when one could get the error Can't
execute the given command... even when no transaction was active.
ALTER TABLE). Now --lower_case_names
also works on Unix.
--sql-mode=option[,option[,option]] option to mysqld.
See section 4.1.1 mysqld Command-line Options.
shutdown on Solaris where the
`.pid' file wasn't deleted.
InnoDB now supports rows of up to 4 GB in size. The former limit was 8000 bytes.
doublewrite file flush method is used in InnoDB.
It reduces the need for Unix fsync() calls to a fraction and
improves performance on most Unix flavors.
InnoDB Monitor to print a lot of InnoDB state
information, including locks, to the standard output. This is useful in
performance tuning.
InnoDB have been fixed.
record_buffer to record_buffer and
record_rnd_buffer. To make things compatible with previous MySQL
versions, if record_rnd_buffer is not set, then it takes the
value of record_buffer.
ORDER BY where some ORDER BY parts
were wrongly removed.
ALTER TABLE and MERGE tables.
my_thread_init() and my_thread_end() to
`mysql_com.h'
--safe-user-create option to mysqld.
SELECT DISTINCT ... HAVING that caused error message
Can't find record in #...
--low-priority-updates and INSERT statements.
slave_net_timeout for replication.
UPDATE and BDB tables.
BDB tables when using key parts.
GRANT FILE ON database.* ...; previously
we added the DROP privilege for the database.
DELETE FROM tbl_name ... LIMIT 0 and
UPDATE FROM tbl_name ... LIMIT 0, which acted as though the
LIMIT clause was not present (they deleted or updated all selected
rows).
CHECK TABLE now checks if an AUTO_INCREMENT column contains
the value 0.
SIGHUP to mysqld will now only flush the logs,
not reset the replication.
1.0e1 (no sign after e).
--force to myisamchk now also updates states.
--warnings to mysqld. Now mysqld
prints the error Aborted connection only if this option is used.
SHOW CREATE TABLE when you didn't have a
PRIMARY KEY.
innodb_unix_file_flush_method variable to
innodb_flush_method.
BIGINT UNSIGNED to DOUBLE. This caused
a problem when doing comparisons with BIGINT values outside of the
signed range.
BDB tables when querying empty tables.
COUNT(DISTINCT) with LEFT JOIN and
there weren't any matching rows.
GEMINI table
type. GEMINI is not released under an Open Source license.
AUTO_INCREMENT sequence wasn't reset when dropping
and adding an AUTO_INCREMENT column.
CREATE ... SELECT now creates non-unique indexes delayed.
LOCK TABLES tbl_name READ followed by
FLUSH TABLES put an exclusive lock on the table.
REAL @variable values were represented with only 2 digits when
converted to strings.
LOAD TABLE FROM MASTER failed.
myisamchk --fast --force will no longer repair tables
that only had the open count wrong.
-lcma thread library on HP-UX 10.20 so
that MySQL will be more stable on HP-UX.
IF() and number of decimals in the result.
INSERT DELAYED was waiting for
a LOCK TABLE.
InnoDB when tablespace was full.
MERGE tables and big tables (> 4G) when using
ORDER BY.
SELECT from MERGE table
sometimes results in incorrectly ordered rows.
REPLACE() when using the ujis character set.
BDB patches 3.2.9.1 and 3.2.9.2.
--skip-stack-trace option to mysqld.
CREATE TEMPORARY now works with InnoDB tables.
InnoDB now promotes sub keys to whole keys.
CONCURRENT to LOAD DATA.
max_allowed_packet is too low to
read a very long log event from the master.
SELECT DISTINCT ... HAVING.
SHOW CREATE TABLE now returns TEMPORARY for temporary tables.
Rows_examined to slow query log.
WHERE that didn't match any rows.
mysqlcheck.
CHECK,
REPAIR, OPTIMIZE.
InnoDB.
SELECT * FROM tbl_name,tbl_name2 ... ORDER BY key_part1 LIMIT #
will use index on key_part1 instead of filesort.
LOCK TABLE to_table WRITE,...; INSERT INTO to_table... SELECT ...
when to_table was empty.
LOCK TABLE and BDB tables.
MATCH() in HAVING clause.
HEAP tables with LIKE.
--mysql-version option to safe_mysqld
INNOBASE to InnoDB (because the INNOBASE
name was already used). All configure options and mysqld
start options now use innodb instead of innobase. This
means that before upgrading to this version, you have to change any
configuration files where you have used innobase options!
CHAR(255) NULL columns.
master-host is not set, as
long as server-id is set and valid `master.info' is present.
SET SQL_SLAVE_SKIP_COUNTER=1; SLAVE START after a manual sanity
check/correction of data integrity.
REGEXP on 64-bit machines.
UPDATE and DELETE with WHERE unique_key_part IS NULL
didn't update/delete all rows.
INSERT DELAYED for tables that support transactions.
TEXT/BLOB column
with wrong date format.
ALTER TABLE and LOAD DATA INFILE that disabled
key-sorting. These commands should now be faster in most cases.
FLUSH or REPAIR) would not use indexes for the
next query.
ALTER TABLE to InnoDB tables on FreeBSD.
mysqld variables myisam_max_sort_file_size and
myisam_max_extra_sort_file_size.
InnoDB.
tis620 character set to make comparisons
case-independent and to fix a bug in LIKE for this character set.
Note: All tables that use the tis620 character set must be
fixed with myisamchk -r or REPAIR TABLE!
--skip-safemalloc option to mysqld.
mysqld is run
as root.
FLUSH TABLES and TEMPORARY tables.
(Problem with freeing the key cache and error Can't reopen table....)
InnoDB with other character sets than latin1
and another problem when using many columns.
DISTINCT and summary functions.
SET TRANSACTION ISOLATION LEVEL ...
SELECT ... FOR UPDATE.
UPDATE where keys weren't always used to find the
rows to be updated.
CONCAT_WS() where it returned incorrect results.
CREATE ... SELECT and INSERT ... SELECT to not
allow concurrent inserts as this could make the binary log hard to repeat.
(Concurrent inserts are enabled if you are not using the binary or update log.)
glibc 2.2.
ORDER BY.
CLIENT_TRANSACTIONS.
SHOW VARIABLES when using INNOBASE tables.
SELECT DISTINCT didn't work.
SHOW ANALYZE for small tables.
run-all-tests.
INNOBASE support
to be compiled.
INNOBASE storage engine and the BDB storage engine
to the MySQL source distribution.
GEMINI tables.
INSERT DELAYED that caused threads to hang when
inserting NULL into an AUTO_INCREMENT column.
CHECK TABLE / REPAIR TABLE that could cause
a thread to hang.
REPLACE will not replace a row that conflicts with an
AUTO_INCREMENT generated key.
mysqld now only sets CLIENT_TRANSACTIONS in
mysql->server_capabilities if the server supports a
transaction-safe storage engine.
LOAD DATA INFILE to allow numeric values to be read into
ENUM and SET columns.
ALTER TABLE ... ORDER BY.
max_user_connections variable to mysqld.
max_allowed_packet, not the
arbitrary limit of 4 MB.
= in argument to --set-variable.
Waiting for table.
SHOW CREATE TABLE now displays the UNION() for MERGE
tables.
ALTER TABLE now remembers the old UNION() definition.
BDB storage engine that occurred when using an index
on multi-part key where a key part may be NULL.
MAX() optimisation on sub-key for BDB tables.
BDB
tables and BLOB or TEXT fields when joining many tables.
BDB tables and TEXT columns.
BLOB key where a const row wasn't found.
mysqlbinlog writes the timestamp value for each query.
This ensures that one gets the same values for date functions like NOW()
when using mysqlbinlog to pipe the queries to another server.
--skip-gemini, --skip-bdb, and --skip-innodb
options to be specified when invoking mysqld, even if these storage
engines are not compiled in to mysqld.
GROUP BY ... DESC.
SET code, when one ran SET @foo=bar,
where bar is a column reference, an error was not properly generated.
--character-sets-dir option to myisampack.
REPAIR TABLE ... EXTENDED.
GROUP BY on an alias,
where the alias was the same as an existing column name.
SEQUENCE() as an example UDF function.
mysql_install_db to use BINARY for CHAR
columns in the privilege tables.
TRUNCATE tbl_name to TRUNCATE TABLE tbl_name
to use the same syntax as Oracle. Until 4.0 we will also accept
TRUNCATE tbl_name so that old code does not break.
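For example, both of the following forms are accepted here (tbl_name is a hypothetical table); the TABLE keyword form is the one that will remain supported:
mysql> TRUNCATE TABLE tbl_name;
mysql> TRUNCATE tbl_name;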
MyISAM tables when a BLOB was
first part of a multi-part key.
CASE didn't work with GROUP BY.
--sort-recover option to myisamchk.
myisamchk -S and OPTIMIZE TABLE now work on Windows.
DISTINCT on results from functions that referred
to a group function, like:
SELECT a, DISTINCT SEC_TO_TIME(SUM(a)) FROM tbl_name GROUP BY a, b;
libmysqlclient library.
Fixed bug in handling STOP event after ROTATE event in
replication.
DROP DATABASE.
Table_locks_immediate and Table_locks_waited status
variables.
SET SQL_SLAVE_SKIP_COUNTER=n command to recover from
replication glitches without a full database copy.
max_binlog_size variable; the binary log will be rotated
automatically when the size crosses the limit.
Last_error, Last_errno, and Slave_skip_counter
variables to SHOW SLAVE STATUS.
MASTER_POS_WAIT() function.
SIGILL, and SIGBUS in addition to
SIGSEGV.
mysqltest to take care of the timing issues in the test
suite.
ALTER TABLE can now be used to change the definition for a
MERGE table.
MERGE tables on Windows.
--temp-pool option to mysqld. Using this option
will cause most temporary files created to use a small set of names,
rather than a unique name for each new file. This is to work around a
problem in the Linux kernel dealing with creating a bunch of new files
with different names. With the old behaviour, Linux seems to "leak"
memory, as it's being allocated to the directory entry cache instead of
the disk cache.
BACKUP, RESTORE, CHECK, REPAIR, and
ANALYZE TABLE.
FULL to SHOW COLUMNS. Now we show the
privilege list for the columns only if this option is given.
SHOW LOGS when there weren't any BDB logs.
mysql_list_fields(). This is
to keep this code compatible with SHOW FIELDS.
MERGE tables didn't work on Windows.
SET PASSWORD=... on Windows.
TRIM("foo" from "foo") didn't return an empty string.
--with-version-suffix option to configure.
mysql_close().
RESTORE TABLE when trying to restore from a non-existent
directory.
SET PASSWORD.
MASTER_POS_WAIT().
BDB interface code. During
testing we found and fixed many errors in the interface code.
HAVING on an empty table could produce one result row when
it shouldn't.
HEAP tables on Windows.
SHOW TABLE STATUS didn't show correct average row length for tables
larger than 4G.
CHECK TABLE ... EXTENDED didn't check row links for fixed size tables.
MEDIUM to CHECK TABLE.
DECIMAL() keys on negative numbers.
HOUR() (and some other TIME functions) on a CHAR column
always returned NULL.
setrlimit() on Linux to get
-O --open-files-limit=# to work on Linux.
bdb_version variable to mysqld.
SELECT ... FROM t1 LEFT JOIN t2 ON (t1.a=t2.a) WHERE t1.a=t2.a
In this case the test in the WHERE clause was wrongly optimised away.
MyISAM when deleting keys with possible NULL
values, but the first key-column was not a prefix-compressed text column.
mysql.server to read the [mysql.server] option file group
rather than the [mysql_server] group.
safe_mysqld and mysql.server to also read the
server option section.
Threads_created status variable to mysqld.
SHOW OPEN TABLES command.
myisamdump works against old mysqld servers.
myisamchk -k# so that it works again.
LOCK TABLES will now automatically start a new transaction.
BDB tables to not use internal subtransactions and reuse
open files to get more speed.
--mysqld=# option to safe_mysqld.
--fields-*-by and
--lines-terminated-by options to mysqldump and
mysqlimport. By Paul DuBois.
--safe-show-database option to mysqld.
have_bdb, have_gemini, have_innobase,
have_raid and have_openssl to SHOW VARIABLES to make it
easy to test for supported extensions.
--open-files-limit option to mysqld.
--open-files option to --open-files-limit in
safe_mysqld.
HEAP tables
that had many keys.
--bdb-no-sync works.
--bdb-recover to --bdb-no-recover as recover should
be on by default.
BDB locks to 10000.
BDB tables.
mysqld_multi.sh to use configure variables. Patch by
Christopher McCrory.
--skip-networking on Debian Linux.
UNOPENED in error messages.
SHOW LOGS queries.
<=> operator.
REPLACE with BDB tables.
LPAD() and RPAD() will shorten the result string if it's longer
than the length argument.
SHOW LOGS command.
BDB logs on shutdown.
PRIMARY keys first, followed by
UNIQUE keys.
UPDATE involving multi-part keys where one
specified all key parts both in the update and the WHERE part. In
this case MySQL could try to update a record that didn't match
the whole WHERE part.
mysqld to report the
hostname as '' in some error messages.
HEAP type tables; the variable
max_heap_table_size wasn't used. Now either MAX_ROWS or
max_heap_table_size can be used to limit the size of a HEAP
type table.
bdb_lock_max variable to bdb_max_lock.
AUTO_INCREMENT on sub-fields for BDB tables.
ANALYZE of BDB tables.
BDB tables, we now store the number of rows; this helps to optimise
queries when we need an approximation of the number of rows.
ROLLBACK when you have updated a non-transactional table
you will get an error as a warning.
--bdb-shared-data option to mysqld.
Slave_open_temp_tables status variable to mysqld
binlog_cache_size and max_binlog_cache_size variables to
mysqld.
DROP TABLE, RENAME TABLE, CREATE INDEX and
DROP INDEX are now transaction endpoints.
DROP DATABASE on a symbolically linked database, both
the link and the original database are deleted.
DROP DATABASE to work on OS/2.
SELECT DISTINCT ... table1 LEFT JOIN
table2 ... when table2 was empty.
--abort-slave-event-count and
--disconnect-slave-event-count options to mysqld for
debugging and testing of replication.
SHOW KEYS now shows whether key is FULLTEXT.
mysqld_multi. See section 4.7.3 mysqld_multi, A Program for Managing Multiple MySQL Servers.
mysql-multi.server.sh. Thanks to
Tim Bunce Tim.Bunce@ig.co.uk for modifying mysql.server to
easily handle hosts running many mysqld processes.
safe_mysqld, mysql.server, and mysql_install_db have
been modified to use mysql_print_defaults instead of various hacks
to read the `my.cnf' files. In addition, the handling of various
paths has been made more consistent with how mysqld handles them
by default.
FULLTEXT indexes in one table.
REPAIR/OPTIMIZE.
Yuri Dario.
FLUSH TABLES tbl_name didn't always flush the index tree
to disk properly.
--bootstrap is now run in a separate thread. This fixes a problem
that caused mysql_install_db to core dump on some Linux machines.
mi_create() to use less stack space.
MATCH() when used
with UNIQUE key.
crash-me and the MySQL benchmarks to also work
with FrontBase.
RESTRICT and CASCADE after DROP TABLE to make
porting easier.
--slow-log.
connect_timeout variable to mysql and mysqladmin.
connect-timeout as an alias for timeout for option files
read by mysql_options().
--pager[=...], --no-pager,
--tee=... and --no-tee to the mysql client. The
new corresponding interactive commands are pager, nopager,
tee and notee. See section 4.8.2 mysql, The Command-line Tool, mysql --help
and the interactive help for more information.
MyISAM table failed.
SELECT, UPDATE and INSERT
statements running. The symptom was that the UPDATE and
INSERT queries were locked for a long time while new SELECT
statements were executed before the updates.
options_files with mysql_options() the
return-found-rows option was ignored.
interactive-timeout in the option file that
is read by mysql_options(). This makes it possible to force
programs that run for a long time (like mysqlhotcopy) to use the
interactive_timeout time instead of the wait_timeout time.
--log-long-format then also queries that
do not use an index are logged, even if the query takes less than
long_query_time seconds.
LEFT JOIN which caused all columns in a reference
table to be NULL.
NATURAL JOIN without keys.
TEXT or BLOB.
DROP of temporary tables wasn't stored in the update/binary log.
SELECT DISTINCT * ... LIMIT # only returned one row.
strstr() for SPARC and cleaned up
the `global.h' header file to avoid a problem with bad aliasing with
the compiler shipped with Red Hat 7.0. (Reported by Trond Eivind Glomsrød)
--skip-networking option now works properly on NT.
ISAM tables when a row with a length
of more than 65K was shortened by a single byte.
MyISAM when running multiple updating processes on
the same table.
FLUSH TABLE tbl_name.
--replicate-ignore-table, --replicate-do-table,
--replicate-wild-ignore-table, and --replicate-wild-do-table
options to mysqld.
IO_CACHE mechanism instead of
FILE to avoid OS problems when there are many files open.
--open-files and --timezone options to safe_mysqld.
CREATE TEMPORARY TABLE ... SELECT ....
CREATE TABLE ... SELECT NULL.
large_file_support,net_read_timeout,
net_write_timeout and query_buffer_size to SHOW VARIABLES.
created_tmp_files and sort_merge_passes
to SHOW STATUS.
FOREIGN KEY definition.
TRUNCATE table_name as a synonym for
DELETE FROM table_name.
BDB key compare function when comparing part keys.
bdb_lock_max variable to mysqld.
mysql_connect() now aborts on Linux if the server doesn't answer in
timeout seconds.
SLAVE START did not work if you started with
--skip-slave-start and had not explicitly run CHANGE MASTER TO.
SHOW MASTER STATUS to be consistent with
SHOW SLAVE STATUS. (It now has no directory in the log name.)
PURGE MASTER LOGS TO.
SHOW MASTER LOGS.
--safemalloc-mem-limit option to mysqld to simulate memory
shortage when compiled with the --with-debug=full option.
SHOW SLAVE STATUS was using an uninitialised mutex if the slave had
not been started yet.
ELT() and MAKE_SET() when the query used
a temporary table.
CHANGE MASTER TO without specifying MASTER_LOG_POS would
set it to 0 instead of 4 and hit the magic number in the master binlog.
ALTER TABLE ... ORDER BY ... syntax added. This will create the
new table with the rows in a specific order.
MyISAM tables sometimes failed
when the datafile was corrupt.
SHOW CREATE when using AUTO_INCREMENT columns.
BDB tables to use new compare function in Berkeley DB 3.2.3.
latin5 (turkish) character set.
FLUSH MASTER and FLUSH SLAVE to RESET MASTER
and RESET SLAVE.
<> to work properly with NULL.
SUBSTRING_INDEX() and REPLACE().
(Patch by Alexander Igonitchev)
CREATE TEMPORARY TABLE IF NOT EXISTS not to produce an error
if the table exists.
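For example, repeating the statement below no longer produces an error when the temporary table already exists (t1 and its column are hypothetical names):
mysql> CREATE TEMPORARY TABLE IF NOT EXISTS t1 (a INT);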
PRIMARY KEY in a BDB table, a hidden
PRIMARY KEY will be created.
BDB tables.
LEFT JOIN in some cases preferred a full table scan when there was
no WHERE clause.
--log-slow-queries, don't count the time waiting for a lock.
MyISAM tables if you start mysqld with
--myisam-recover.
TYPE= keyword from CHECK and
REPAIR. Allow CHECK options to be combined. (You can still
use TYPE=, but this usage is deprecated.)
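As an illustration (t1 is a hypothetical table), check options can now be given without TYPE= and may be combined:
mysql> CHECK TABLE t1 FAST QUICK;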
--replicate-rewrite-db option to mysqld.
--skip-slave-start option to mysqld.
INSERT INTO foo(some_key)
values (1),(1)) erroneously terminated the slave thread.
DISTINCT is only used on columns
from some of the tables.
1e1).
SHOW GRANTS didn't always show all column grants.
--default-extra-file=# option to all MySQL clients.
INSERT statements now are initialised properly.
UPDATE didn't always work when used with a range on a timestamp that
was part of the key that was used to find rows.
FULLTEXT index when inserting a NULL column.
mkstemp() instead of tempnam(). Based
on a patch from John Jones.
databasename works as second argument to mysqlhotcopy.
UMASK and UMASK_DIR environment variables
now can be specified in octal by beginning the value with a zero.
RIGHT JOIN. This makes RIGHT a reserved word.
@@IDENTITY as a synonym for LAST_INSERT_ID().
(This is for MSSQL compatibility.)
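For example, after inserting into a table with an AUTO_INCREMENT column, the following two statements should return the same value:
mysql> SELECT LAST_INSERT_ID();
mysql> SELECT @@IDENTITY;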
myisamchk and REPAIR when using FULLTEXT
index.
LOAD DATA INFILE now works with FIFOs.
(Patch by Toni L. Harbaugh-Blackford.)
FLUSH LOGS broke replication if you specified a log name with an
explicit extension as the value of the log-bin option.
MyISAM with packed multi-part keys.
CHECK TABLE on Windows.
FULLTEXT index always used the koi8_ukr
character set.
CHECK TABLE.
MyISAM repair/reindex code didn't use the --tmpdir
option for its temporary files.
BACKUP TABLE and RESTORE TABLE.
CHANGE MASTER TO when the slave did not have
the master to start with.
Time in the processlist for Connect of
the slave thread.
FLUSH MASTER if you didn't specify
a filename argument to --log-bin.
--memlock option to mysqld to lock mysqld
in memory on systems with the mlockall() call (like in Solaris).
HEAP tables didn't use keys properly. (Bug from 3.23.23.)
MERGE tables (keys, mapping, creation,
documentation...). See section 7.2 MERGE Tables.
mysqldump from 3.23 which caused some CHAR columns
not to be quoted.
analyze, check, optimize and repair code.
OPTIMIZE TABLE is now mapped to REPAIR with statistics and
sorting of the index tree. This means that for the moment it only
works on MyISAM tables.
ORDER BY bug with BDB tables.
mysqld couldn't remove the `.pid' file
under Windows.
--log-isam to log MyISAM tables instead of isam
tables.
CHECK TABLE to work on Windows.
pwrite() safe on Windows.
created_tmp_disk_tables variable to mysqld.
TIMESTAMP(X) columns, MySQL now reports columns with X
other than 14 or 8 to be strings.
latin1 as it was before MySQL Version 3.23.23.
Any table that was created or modified with 3.23.22 must be repaired if it has
CHAR columns that may contain characters with ASCII values greater than
128!
BDB tables and reading on a unique (not primary) key.
win1251 character set (it's now only marked deprecated).
REPAIR TABLE or myisamchk before use!
--core-file option to mysqld to get a core file on
Linux if mysqld dies on the SIGSEGV signal.
mysql now starts with option
--no-named-commands (-g) by default. This option can be
disabled with --enable-named-commands (-G). This may cause
incompatibility problems in some cases, for example, in SQL scripts that
use named commands without a semicolon. Long-format commands
still work from the first line.
DROP TABLE statements at
the same time.
LEFT JOIN on an
empty table.
mysqld with incorrect options.
free() bug in mysqlimport.
MyISAM index handling of
DECIMAL/NUMERIC keys.
MyISAM tables. In some contexts,
usage of MIN(key_part) or MAX(key_part) returned an empty set.
mysqlhotcopy to use the new FLUSH TABLES table_list
syntax. Only tables which are being backed up are flushed now.
--enable-thread-safe-client so
that both non-threaded (-lmysqlclient) and threaded
(-lmysqlclient_r) libraries are built. Users who linked
against a threaded -lmysqlclient will need to link against
-lmysqlclient_r now.
RENAME TABLE command.
NULL values in COUNT(DISTINCT ...).
ALTER TABLE, LOAD DATA INFILE on empty tables and
INSERT ... SELECT ... on empty tables to create non-unique indexes
in a separate batch with sorting. This will make the above calls much
faster when you have many indexes.
ALTER TABLE now logs the first used insert_id correctly.
BLOB column.
DATE_ADD/DATE_SUB where it returned a datetime instead
of a date.
***DEAD*** in SHOW PROCESSLIST.
pthread_rwlock_rdlock code.
HEAP table, all rows
weren't always deleted.
HEAP tables for searches on a part
index.
SELECT on part keys to work with BDB tables.
INSERT INTO bdb_table ... SELECT to work with BDB tables.
CHECK TABLE now updates key statistics for the table.
ANALYZE TABLE will now only update tables that have been changed
since the last ANALYZE. Note that this is a new feature and tables
will not be marked to be analysed until they are updated in any way with
3.23.23 or newer. For older tables, you have to do CHECK TABLE
to update the key distribution.
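A minimal illustration (t1 is a hypothetical MyISAM table): running CHECK TABLE once updates the key distribution of an older table, after which ANALYZE TABLE only re-analyses the table when it has changed:
mysql> CHECK TABLE t1;
mysql> ANALYZE TABLE t1;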
CHECK, ANALYZE,
REPAIR and SHOW CREATE commands.
CHANGE MASTER TO statement.
FAST, QUICK EXTENDED check types to
CHECK TABLES.
myisamchk so that --fast and
--check-only-changed are also honored with --sort-index and
--analyze.
LOAD TABLE FROM MASTER that did not lock the
table during index re-build.
LOAD DATA INFILE broke replication if the database was excluded
from replication.
SHOW SLAVE STATUS and SHOW MASTER STATUS.
SLAVE STOP now will not return until the slave thread actually exits.
MATCH() function and FULLTEXT index type
(for MyISAM files). This makes FULLTEXT a reserved word.
lex_hash.h is created properly for each MySQL
distribution.
MASTER and COLLECTION are not reserved words.
--slow-query-log didn't contain the whole queries.
BDB tables are rolled back if the
connection is closed unexpectedly.
gcc 2.96 (intel) and gcc 2.9
(IA64) in gen_lex_hash.c.
host= in the
`my.cnf' file.
DATE_ADD()/DATE_SUB()
against a number.
-F, --fast for myisamchk. Added
-C, --check-only-changed option to myisamchk.
ANALYZE tbl_name to update key statistics for tables.
0x... to be regarded as integers by default.
SHOW PROCESSLIST.
auto-rehash on reconnect for the mysql client.
MyISAM, where the index file couldn't
get bigger than 64M.
SHOW MASTER STATUS and SHOW SLAVE STATUS.
mysql_character_set_name() function to the
MySQL C API.
mysql_config script.
< or > with a char column that was only
partly indexed.
mysqladmin to use CREATE DATABASE and DROP
DATABASE statements instead of the old deprecated API calls.
chown warning in safe_mysqld.
ORDER BY that was introduced in 3.23.19.
DELETE FROM tbl_name to do a drop+create of
the table if we are in AUTOCOMMIT mode (needed for BDB tables).
ISAM/MyISAM
index files get full during an INSERT/UPDATE.
myisamchk didn't correctly update row checksum when used with
-ro (this only gave a warning in subsequent runs).
REPAIR TABLE so that it works with tables without indexes.
DROP DATABASE.
LOAD TABLE FROM MASTER is sufficiently bug-free to announce it as
a feature.
MATCH and AGAINST are now reserved words.
DELETE FROM tbl_name removed the `.frm' file.
SHOW CREATE TABLE.
GPL for the server code and utilities and
to LGPL for the client libraries.
MyISAM table
when doing update based on key on a table with many keys and some key changed
values.
ORDER BY can now use REF keys to find subsets of the rows
that need to be sorted.
print_defaults program to my_print_defaults
to avoid name confusion.
NULLIF() to work as required by SQL-99.
net_read_timeout and net_write_timeout as startup
parameters to mysqld.
myisamchk --sort-records
on a table with prefix compressed index.
pack_isam and myisampack to the standard MySQL
distribution.
BEGIN WORK (the same as BEGIN).
ORDER BY on a CONV() expression.
LOAD TABLE FROM MASTER.
FLUSH MASTER and FLUSH SLAVE.
FLUSH TABLES WITH READ LOCK to make a global lock suitable for
making a copy of MySQL datafiles.
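For example, a copy of the datafiles could be taken roughly like this (the copy step is only indicated in words):
mysql> FLUSH TABLES WITH READ LOCK;
(copy the datafiles while the global read lock is held)
mysql> UNLOCK TABLES;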
CREATE TABLE ... SELECT ... PROCEDURE now works.
GROUP BY on VARCHAR/CHAR columns.
READ and a
WRITE lock.
myisamchk and RAID tables.
FIND_IN_SET() when the first argument was NULL.
LEFT JOIN and ORDER BY where the first
table had only one matching row.
duplicated key problem when doing big GROUP BY operations.
(This bug was probably introduced in 3.23.15.)
INNER JOIN to match SQL-99.
NATURAL JOIN syntax.
BDB interface.
--no-defaults and --defaults-file to
safe_mysqld.sh and mysql_install_db.sh.
USE INDEX works with PRIMARY keys.
BEGIN statement to start a transaction in AUTOCOMMIT mode.
AUTOCOMMIT mode
and if there is a pending transaction. If there is a pending transaction,
the client library will give an error before reconnecting to the server to
let the client know that the server did a rollback.
The protocol is still backward-compatible with old clients.
KILL now works on a thread that is locked on a 'write' to a dead client.
log-slave-updates option to mysqld, to allow
daisy-chaining the slaves.
pthread_t
is not the same as int.
INSERT DELAYED code when doing
ALTER TABLE.
INSERT DELAYED.
SLAVE START and SLAVE STOP statements.
TYPE=QUICK option to CHECK and to REPAIR.
REPAIR TABLE when the table was in use by other threads.
gdb when one does a lot of reconnects. This will also improve
systems where you can't use persistent connections.
UPDATE IGNORE will not abort if an update results in a
DUPLICATE_KEY error.
CREATE TEMPORARY TABLE commands in the update log.
delay_key_write tables and CHECK TABLE.
replicate-do-db and replicate-ignore-db options to
mysqld, to restrict which databases get replicated.
SQL_LOG_BIN option.
mysqld as root, you must now use the --user=root option.
FLUSH TABLES command.
slow_launch_time variable and the Slow_launch_threads
status variable to mysqld. These can be examined with
mysqladmin variables and mysqladmin extended-status.
INET_NTOA() and INET_ATON().
IF() now depends on the second and
third arguments and not only on the second argument.
myisamchk could go into a loop when trying to
repair a crashed table.
INSERT DELAYED to update log if SQL_LOG_UPDATE=0.
REPLACE on HEAP tables.
SHOW VARIABLES output.
DELETE of many rows on a table with
compressed keys where MySQL scanned the index to find the rows.
CHECK on table with deleted keyblocks.
LAST_INSERT_ID() to update
a table with an AUTO_INCREMENT key.
NULLIF() function.
LOAD DATA INFILE on a table with
BLOB/TEXT columns.
MyISAM to be faster when inserting keys in sorted order.
EXPLAIN SELECT ... now also prints out whether MySQL needs to
create a temporary table or use file sorting when resolving the SELECT.
ORDER BY parts where the part is a
constant expression in the WHERE part. Indexes can now be used
even if the ORDER BY doesn't match the index exactly, as long as
all the unused index parts and all the extra ORDER BY
columns are constants in the WHERE clause. See section 5.4.3 How MySQL Uses Indexes.
UPDATE and DELETE on a whole unique key in the WHERE part
are now faster than before.
RAID_CHUNKSIZE to be in 1024-byte increments.
LOAD_FILE(NULL).
mysql_real_escape_string() function to the MySQL C API.
CONCAT() where one of the arguments was a function
that returned a modified argument.
myisamchk, where it updated the header in
the index file when one only checked the table. This confused the
mysqld daemon if it updated the same table at the same time. Now
the status in the index file is only updated if one uses
--update-state. With older myisamchk versions you should
use --read-only when only checking tables, if there is the
slightest chance that the mysqld server is working on the table at the
same time!
DROP TABLE is logged in the update log.
DECIMAL() key field
where the column data contained leading zeros.
myisamchk when the AUTO_INCREMENT column isn't
the first key.
DATETIME in ISO8601 format: 2000-03-12T12:00:00
mysqld binary can now handle many different
character sets (you can choose which when starting mysqld).
REPAIR TABLE.
mysql_thread_safe() function to the MySQL C API.
UMASK_DIR environment variable.
CONNECTION_ID() function to return the client connection thread
ID.
= on BLOB or VARCHAR BINARY keys, where
only a part of the column was indexed, the whole column of the result
row wasn't compared.
sjis character set and ORDER BY.
GROUP BY part.
LOCK TABLE command; this fixed the problem one got when running
the test-ATIS test with --fast or --check-only-changed.
SQL_BUFFER_RESULT option to SELECT.
CHECK TABLE command.
MyISAM in 3.23.12 that didn't get into the source
distribution because of CVS problems.
mysqladmin shutdown will wait for the local server
to close down.
print_defaults program to the `.rpm' files. Removed
mysqlbug from the client `.rpm' file.
MyISAM involving REPLACE ... SELECT ... which could
give a corrupted table.
myisamchk where it incorrectly reset the
AUTO_INCREMENT value.
DISTINCT on HEAP temporary tables to use hashed
keys to quickly find duplicated rows. This mostly concerns queries of
type SELECT DISTINCT ... GROUP BY .... This fixes a problem where
not all duplicates were removed in queries of the above type. In
addition, the new code is MUCH faster.
IF NOT EXISTS clause to CREATE DATABASE.
--all-databases and --databases options to mysqldump
to allow dumping of many databases at the same time.
DECIMAL() index in MyISAM tables.
mysqladmin shutdown on a local connection, mysqladmin
now waits until the PID file is gone before terminating.
COUNT(DISTINCT ...) queries.
myisamchk works properly with RAID tables.
LEFT JOIN and key_field IS NULL.
net_clear() which could give the error Aborted
connection in the MySQL clients.
USE INDEX (key_list) and IGNORE INDEX (key_list) as
parameters in SELECT.
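For example (t1, its columns, and idx_a are hypothetical names):
mysql> SELECT * FROM t1 USE INDEX (idx_a) WHERE a=1 AND b=2;
mysql> SELECT * FROM t1 IGNORE INDEX (idx_a) WHERE a=1;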
DELETE and RENAME should now work on RAID tables.
ALTER TABLE tbl_name ADD (field_list) syntax.
GRANT/REVOKE ALL PRIVILEGES doesn't affect
GRANT OPTION.
SHOW GRANTS.
UNIQUE INDEX in CREATE statements.
mysqlhotcopy - fast online hot-backup utility for local
MySQL databases. By Tim Bunce.
mysqlaccess. Thanks to Steve Harvey for this.
--i-am-a-dummy and --safe-updates options to mysql.
select_limit and max_join_size variables to mysql.
SQL_MAX_JOIN_SIZE and SQL_SAFE_UPDATES options.
READ LOCAL lock that doesn't lock the table for concurrent
inserts. (This is used by mysqldump.)
LOCK TABLES ... READ doesn't anymore allow concurrent
inserts.
--skip-delay-key-write option to mysqld.
_rowid can now be used as an alias for an integer type unique indexed
column.
SIGPIPE when compiling with --thread-safe-clients
to make things safe for old clients.
LOCK TABLES.
INSERT DELAYED.
date_col BETWEEN const_date AND const_date works.
NULL in a table with
BLOB/TEXT columns.
WHERE K1=1 and K3=2 and (K2=2 and K4=4 or K2=3 and K4=5)
source command to mysql to allow reading of batch files
inside the mysql client. Original patch by Matthew Vanecek.
WITH GRANT OPTION option.
GRANT error when using tables from many
databases in the same query.
SELECT when using many overlapping indexes.
MySQL should now be able to choose keys even better when there
are many keys to choose from.
=). For example, the following type of queries should now
be faster: SELECT * from key_part_1=const and key_part_2 > const2
VARCHAR columns to CHAR columns
didn't change row type from dynamic to fixed.
SELECT FLOOR(POW(2,63)).
mysqld startup option from --delay-key-write to
--delay-key-write-for-all-tables.
read-next-on-key to HEAP tables. This should fix all
problems with HEAP tables when using non-UNIQUE keys.
--log-slow-queries option to mysqld to log all queries
that take a long time to a separate log file with a time indicating how
long the query took.
WHERE key_col=RAND(...).
SELECT ... LEFT JOIN ... key_col IS NULL,
when key_col could contain NULL values.
LOAD DATA INFILE.
NISAM.
ISAM when doing some ORDER BY ... DESC queries.
--delay-key-write didn't enable delayed key writing.
TEXT column which involved only case changes.
INSERT DELAYED doesn't update timestamps that are given.
YEARWEEK() and options x, X, v and
V to DATE_FORMAT().
MAX(indexed_column) and HEAP tables.
BLOB NULL keys and LIKE "prefix%".
MyISAM and fixed-length rows < 5 bytes.
GROUP BY queries.
ENUM field value
was too big.
pthread_mutex_timedwait,
which is used with INSERT DELAYED. See section 2.6.1 Linux Notes (All Linux Versions).
MyISAM with keys > 250 characters.
MyISAM one can now do an INSERT at the same time as other
threads are reading from the table.
max_write_lock_count variable to mysqld to force a
READ lock after a certain number of WRITE locks.
delay_key_write on show variables.
concurrency variable to thread_concurrency.
LOCATE(substr,str), POSITION(substr IN str),
LOCATE(substr,str,pos), INSTR(str,substr),
LEFT(str,len), RIGHT(str,len),
SUBSTRING(str,pos,len), SUBSTRING(str FROM pos FOR len),
MID(str,pos,len), SUBSTRING(str,pos), SUBSTRING(str
FROM pos), SUBSTRING_INDEX(str,delim,count), RTRIM(str),
TRIM([[BOTH | TRAILING] [remstr] FROM] str),
REPLACE(str,from_str,to_str), REVERSE(str),
INSERT(str,pos,len,newstr), LCASE(str), LOWER(str),
UCASE(str) and UPPER(str); patch by Wei He.
FULL to SHOW PROCESSLIST.
--verbose to mysqladmin.
HEAP to MyISAM.
HEAP tables when doing insert + delete + insert + scan the
table.
REPLACE() and LOAD DATA INFILE.
interactive_timeout variable to mysqld.
mysql_data_seek() from ulong to
ulonglong.
-O lower_case_table_names={0|1} option to mysqld to allow
users to force table names to lowercase.
SELECT ... INTO DUMPFILE.
--ansi option to mysqld to make some functions
SQL-99 compatible.
#sql.
` (" in --ansi mode).
snprintf() when printing floats to avoid some buffer
overflows on FreeBSD.
FLOOR() overflow safe on FreeBSD.
--quote-names option to mysqldump.
PRIMARY KEY NOT NULL.
encrypt() to be thread-safe and not reuse buffer.
mysql_odbc_escape_string() function to support big5 characters in
MyODBC.
FLOAT and DOUBLE (without any length modifiers)
no longer are fixed decimal point numbers.
FLOAT(X): Now this is the same as FLOAT if
X <= 24 and a DOUBLE if 24 < X <= 53.
DECIMAL(X) is now an alias for DECIMAL(X,0) and DECIMAL
is now an alias for DECIMAL(10,0). The same goes for NUMERIC.
ROW_FORMAT={default | dynamic | fixed | compressed} to
CREATE_TABLE.
DELETE FROM table_name didn't work on temporary tables.
CHAR_LENGTH() to be multi-byte character safe.
ORD(string).
SELECT DISTINCT ... ORDER BY RAND().
MyISAM
level.
ALTER TABLE didn't work.
AUTO_INCREMENT column in two keys
MyISAM, you now can have an AUTO_INCREMENT column as a key
sub part:
CREATE TABLE foo (a INT NOT NULL AUTO_INCREMENT, b CHAR(5), PRIMARY KEY (b,a))
MyISAM with packed char keys that could be NULL.
AS on field name with CREATE TABLE table_name SELECT ... didn't
work.
NATIONAL and NCHAR when defining character columns.
This is the same as not using BINARY.
NULL columns in a PRIMARY KEY (only in UNIQUE
keys).
LAST_INSERT_ID() if one uses this in ODBC:
WHERE auto_increment_column IS NULL. This seems to fix some problems
with Access.
SET SQL_AUTO_IS_NULL=0|1 now turns on/off the handling of
searching after the last inserted row with WHERE
auto_increment_column IS NULL.
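For example, turning the behaviour off makes the ODBC-style query below stop returning the last inserted row (t1 and auto_col are hypothetical names):
mysql> SET SQL_AUTO_IS_NULL=0;
mysql> SELECT * FROM t1 WHERE auto_col IS NULL;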
concurrency to mysqld for Solaris.
--relative option to mysqladmin to make
extended-status more useful to monitor changes.
COUNT(DISTINCT ...) on an empty table.
LOAD DATA INFILE and BLOB columns.
~ (negation).
UDF functions.
DATETIME into a TIME column no longer will
try to store 'days' in it.
SUM().)
LIKE "%" on an index that may have NULL values.
REVOKE ALL PRIVILEGES didn't revoke all privileges.
GRANT option for a database, he couldn't grant
privileges to other users.
SHOW GRANTS FOR user (by Sinisa).
date_add syntax: date/datetime + INTERVAL # interval_type.
By Joshua Chamas.
LOAD DATA REPLACE.
REGEXP is now case-insensitive if you use non-binary strings.
MyISAM.
ASC is now the default again for ORDER BY.
LIMIT to UPDATE.
mysql_change_user() function to the MySQL C API.
SHOW VARIABLES.
--[whitespace] comments.
INSERT into tbl_name VALUES (), that is, you may now specify
an empty value list to insert a row in which each column is set to its
default value.
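For example (t1 is a hypothetical table), this inserts one row with every column set to its default:
mysql> INSERT INTO t1 VALUES ();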
SUBSTRING(text FROM pos) to conform to SQL-99. (Before this
construct returned the rightmost pos characters.)
SUM() with GROUP BY returned 0 on some systems.
SHOW TABLE STATUS.
DELAY_KEY_WRITE option to CREATE TABLE.
AUTO_INCREMENT on any key part.
YEAR(NOW()) and YEAR(CURDATE()).
CASE construct.
COALESCE().
SELECT * FROM table_name WHERE
key_part1 >= const AND (key_part2 = const OR key_part2 = const). The
bug was that some rows could be duplicated in the result.
myisamchk without -a updated the index
distribution incorrectly.
SET SQL_LOW_PRIORITY_UPDATES=1 was causing a parse error.
WHERE clause.
UPDATE tbl_name SET KEY=KEY+1 WHERE KEY > 100
'1999-01-00'.
SELECT ... WHERE key_part1=const1 AND
key_part_2=const2 AND key_part1=const4 AND key_part2=const4; indextype
should be range instead of ref.
egcs 1.1.2 optimiser bug (when using BLOBs) on Linux Alpha.
LOCK TABLES combined with DELETE FROM table.
MyISAM tables now allow keys on NULL and BLOB/TEXT columns.
SELECT ... FROM t1 LEFT JOIN t2 ON ... WHERE t2.not_null_column IS NULL.
ORDER BY and GROUP BY can be done on functions.
ORDER BY RAND().
WHERE key_column = function.
WHERE key_column = col_name even if
the columns are not identically packed.
WHERE col_name IS NULL.
MyISAM tables)
HEAP temporary tables to MyISAM tables
in case of 'table is full' errors.
--init-file=file_name option to mysqld.
COUNT(DISTINCT value, [value, ...]).
CREATE TEMPORARY TABLE now creates a temporary table, in its own
namespace, that is automatically deleted if connection is dropped.
CASE): CASE, THEN, WHEN, ELSE and END.
EXPORT_SET() and MD5().
MyISAM) with a lot of new features.
See section 7.1 MyISAM Tables.
HEAP tables which are extremely fast for
lookups.
LOAD_FILE(filename) to get the contents of a file as a
string value.
<=> which will act as = but will return TRUE
if both arguments are NULL. This is useful for comparing changes
between tables.
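For example:
mysql> SELECT 1 <=> 1, NULL <=> NULL, 1 <=> NULL;
This returns 1, 1 and 0, whereas = would return NULL for the last two comparisons.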
EXTRACT(interval FROM datetime) function.
FLOAT(X) are not rounded on storage and may be
in scientific notation (1.0 E+10) when retrieved.
REPLACE is now faster than before.
LIKE character comparison to behave as =;
This means that 'e' LIKE 'é' is now true. (If the line doesn't
display correctly, the latter 'e' is a French 'e' with an acute accent.)
SHOW TABLE STATUS returns a lot of information about the tables.
LIKE to the SHOW STATUS command.
Privileges column to SHOW COLUMNS.
Packed and Comment columns to SHOW INDEX.
CREATE TABLE ... COMMENT "xxx").
UNIQUE, as in
CREATE TABLE table_name (col INT not null UNIQUE)
CREATE TABLE table_name SELECT ...
CREATE TABLE IF NOT EXISTS ...
CHAR(0) columns.
DATE_FORMAT() now requires `%' before any format character.
DELAYED is now a reserved word (sorry about that :( ).
analyse, file: `sql_analyse.c'.
This will describe the data in your query. Try the following:
SELECT ... FROM ... WHERE ... PROCEDURE ANALYSE([max elements,[max memory]])
This procedure is extremely useful when you want to check the data in your table!
BINARY cast to force a string to be compared in case-sensitive fashion.
--skip-show-database option to mysqld.
UPDATE now also works
with BLOB/TEXT columns.
INNER join syntax. NOTE: This made INNER
a reserved word!
IP/NETMASK syntax.
NOT NULL DATE/DATETIME column with IS
NULL, this is changed to a compare against 0 to satisfy some ODBC
applications. (By shreeve@uci.edu.)
NULL IN (...) now returns NULL instead of 0. This will
ensure that null_column NOT IN (...) doesn't match
NULL values.
TIME columns.
TIME strings to be more strict. Now the
fractional second part is detected (and currently skipped). The
following formats are supported:
DATETIME.
LOW_PRIORITY attribute to LOAD DATA INFILE.
LOAD DATA INFILE.
DECIMAL(x,y) now works according to SQL-99.
LAST_INSERT_ID() is now updated for INSERT INTO ... SELECT.
SELECT DISTINCT is much faster; it uses the new UNIQUE
functionality in MyISAM. One difference compared to MySQL Version 3.22
is that the output of DISTINCT is no longer sorted.
mysql_num_fields() on
a MYSQL object, you must use mysql_field_count() instead.
LIBWRAP; patch by Henning P. Schmiedehausen.
AUTO_INCREMENT for other than numerical columns.
AUTO_INCREMENT will now automatically make the column
NOT NULL.
NULL as the default value for AUTO_INCREMENT columns.
SQL_BIG_RESULT; SQL_SMALL_RESULT is now default.
--enable-large-files and --disable-large-files switches
to configure. See `configure.in' for some systems where this is
automatically turned off because of broken implementations.
readline to 4.0.
CREATE TABLE options: PACK_KEYS and CHECKSUM.
--default-table-type option to mysqld.
The 3.22 version has faster and safer connect code than version 3.21, as well as a lot of nice new enhancements. As there aren't really any major changes, upgrading from 3.21 to 3.22 should be very easy and painless. See section 2.5.4 Upgrading from Version 3.21 to 3.22.
STD().
ISAM library from 3.23.
INSERT DELAYED.
LEFT JOIN/STRAIGHT_JOIN
on a table with only one row.
GROUP BY on TINYBLOB columns; this
caused bugzilla to not show rows in some queries.
LOCK TABLE.
SELECT DISTINCT queries.
mysqlhotcopy, a fast online hot-backup utility for local MySQL
databases. By Tim Bunce.
mysqlaccess. Thanks to Steve Harvey for this.
GROUP functions.
ISAM code when deleting rows on tables with
packed indexes.
SELECT when using many overlapping indexes.
SELECT FLOOR(POW(2,63)).
WITH GRANT OPTION option.
GROUP BY queries.
ENUM field value
was too big.
mysqlshutdown.exe and mysqlwatch.exe to the Windows
distribution.
ORDER BY on a reference key.
INSERT DELAYED doesn't update timestamps that are given.
LEFT JOIN and COUNT() on a column which
was declared NULL + and it had a DEFAULT value.
CONCAT() in a WHERE clause.
AVG() and STD() with NULL values.
BLOB columns.
ROUND() will now work on Windows.
BLOB/TEXT column argument to
REVERSE().
/*! */ with version numbers.
SUBSTRING(text FROM pos) to conform to SQL-99. (Before this
construct returned the rightmost 'pos' characters.)
LOCK TABLES combined with DELETE FROM table
INSERT ... SELECT didn't use BIG_TABLES.
SET SQL_LOW_PRIORITY_UPDATES=# didn't work.
GRANT ... IDENTIFIED BY
SELECT * FROM table_name WHERE key_part1 >= const AND (key_part2 = const
OR key_part2 = const).
ISAM.
DATA is no longer a reserved word.
LOCK TABLES table_name READ; FLUSH TABLES;
isamchk should now work on Windows.
libtool 1.3.2.
configure.
--defaults-file=### to option file handling to force use
of only one specific option file.
CREATE syntax to ignore MySQL Version 3.23 keywords.
INSERT DELAYED on a table locked with
LOCK TABLES.
DROP TABLE on a table that was
locked by another thread.
GRANT/REVOKE commands in the update log.
isamchk to detect a new error condition.
NATURAL LEFT JOIN.
mysql_close() directly after
mysql_init().
delayed_insert_thread counting when you couldn't create a new
delayed_insert thread.
CONCAT() with many arguments.
DELETE FROM TABLE when table was locked by
another thread.
LEFT JOIN involving empty tables.
mysql.db column from CHAR(32) to CHAR(60).
MODIFY and DELAYED are no longer reserved words.
TIME column.
Host '...' is not allowed to connect to this MySQL
server after one had inserted a new MySQL user with a GRANT
command.
TCP_NODELAY also on Linux (should give faster TCP/IP
connections).
STD() for big tables when result should be 0.
INSERT DELAYED had some garbage at end in the update log.
mysql_install_db (from 3.22.17).
BLOB
columns.
shutdown
not all threads died properly.
-O flush_time=# to mysqld. This is mostly
useful on Windows and tells how often MySQL should close all
unused tables and flush all updated tables to disk.
VARCHAR column compared with CHAR column
didn't use keys efficiently.
--log-update and connecting
without a default database.
configure and portability problems.
LEFT JOIN on tables that had circular dependencies caused
mysqld to hang forever.
mysqladmin processlist could kill the server if a new user logged in.
DELETE FROM tbl_name WHERE key_column=col_name didn't find any matching
rows. Fixed.
DATE_ADD(column, ...) didn't work.
INSERT DELAYED could deadlock with status 'upgrading lock'
ENCRYPT() to take longer salt strings than 2 characters.
longlong2str is now much faster than before. For Intel x86
platforms, this function is written in optimised assembler.
MODIFY keyword to ALTER TABLE.
GRANT used with IDENTIFIED BY didn't take effect until privileges
were flushed.
SHOW STATUS.
ORDER BY with 'only index' optimisation when there
were multiple key definitions for a used column.
DATE and DATETIME columns are now up to 5 times faster than
before.
INSERT DELAYED can be used to let the client do other things while the
server inserts rows into a table.
LEFT JOIN USING (col1,col2) didn't work if one used it with tables
from 2 different databases.
LOAD DATA LOCAL INFILE didn't work in the Unix version because of
a missing file.
VARCHAR/BLOB on very short rows (< 4 bytes);
error 127 could occur when deleting rows.
BLOB/TEXT through formulas didn't work for short (< 256 char)
strings.
GRANT on a new host, mysqld could die on the first
connect from this host.
ORDER BY on column name that was the same
name as an alias.
BENCHMARK(loop_count,expression) function to time expressions.
mysqld to make it easier to start from shell
scripts.
TIMESTAMP column to NULL didn't record the timestamp
value in the update log.
INSERT INTO TABLE ... SELECT ... GROUP BY.
localtime_r() on Windows so that it will no longer crash
if your date is > 2039, but instead will return a time of all zeros.
^Z (ASCII 26) to \Z as ^Z doesn't
work with pipes on Windows.
mysql_fix_privileges adds a new column to the mysql.func to
support aggregate UDF functions in future MySQL releases.
NOW(), CURDATE() or CURTIME() directly in a
column didn't work.
SELECT COUNT(*) ... LEFT JOIN ... didn't work with no WHERE part.
pthread_cond() on the Windows version.
get_lock() now correctly times out on Windows!
DATE_ADD() and DATE_SUB() in a
WHERE clause.
GRANT ... TO user
IDENTIFIED BY 'password' syntax.
GRANT checking with SELECT on many tables.
mysql_fix_privilege_tables to the RPM
distribution. This is not run by default because it relies on the client
package.
SQL_SMALL_RESULT to SELECT to force use of
fast temporary tables when you know that the result set will be small.
DATE_ADD/DATE_SUB() doesn't have enough days.
GRANT compares columns in case-insensitive fashion.
ALTER TABLE dump core in
some contexts.
user@hostname can now include `.' and `-'
without quotes in the context of the GRANT, REVOKE and
SET PASSWORD FOR ... statements.
isamchk for tables which need big temporary files.
mysql_fix_privilege_tables script
when you upgrade to this version! This is needed because of the new
GRANT system. If you don't do this, you will get Access
denied when you try to use ALTER TABLE, CREATE INDEX, or
DROP INDEX.
GRANT to allow/deny users table and column access.
USER() to return a value in user@host format.
Formerly it returned only user.
PASSWORD for another user.
FLUSH STATUS that resets most status variables to zero.
aborted_threads, aborted_connects.
connection_timeout.
SET SQL_WARNINGS=1 to get a warning count also for simple
inserts.
SIGTERM instead of SIGQUIT with
shutdown to work better on FreeBSD.
\G (print vertically) to mysql.
SELECT HIGH_PRIORITY ... killed mysqld.
IS NULL on a AUTO_INCREMENT column in a LEFT JOIN didn't
work as expected.
MAKE_SET().
mysql_install_db no longer starts the MySQL server! You
should start mysqld with safe_mysqld after installing it! The
MySQL RPM will, however, start the server as before.
--bootstrap option to mysqld and recoded
mysql_install_db to use it. This will make it easier to install
MySQL with RPMs.
+, - (sign and minus), *, /, %,
ABS() and MOD() to be BIGINT aware (64-bit safe).
ALTER TABLE that caused mysqld to crash.
INSERT.)
INSERT INTO tbl_name SET col_name=value, col_name=value, ...
MYSQL_INIT_COMMAND to mysql_options() to make
a query on connect or reconnect.
MYSQL_READ_DEFAULT_FILE and
MYSQL_READ_DEFAULT_GROUP to mysql_options() to read the
following parameters from the MySQL option files: port,
socket, compress, password, pipe, timeout,
user, init-command, host and database.
maybe_null to the UDF structure.
IGNORE to INSERT statements with many rows.
koi8 character sets; users of
koi8 must run isamchk -rq on each table that has an
index on a CHAR or VARCHAR column.
mysql_setpermission, by Luuk de Boer. It allows easy
creation of new users with permissions for specific databases.
LOAD DATA INFILE).
SHOW STATUS and changed format of output to
be like SHOW VARIABLES.
extended-status command to mysqladmin which will show the
new status variables.
SET SQL_LOG_UPDATE=0 caused a lockup of the server.
FLUSH [ TABLES | HOSTS | LOGS | PRIVILEGES ] [, ...]
KILL thread_id.
ALTER TABLE from a INT
to a short CHAR() column.
SELECT HIGH_PRIORITY; this will get a lock for the
SELECT even if there is a thread waiting for another
SELECT to get a WRITE LOCK.
wild_compare() to string class to be able to use LIKE on
BLOB/TEXT columns with \0.
ESCAPE option to LIKE.
mysqladmin debug.
mysqld on Windows with the --flush option.
This will flush all tables to disk after each update. This makes things
much safer on the Windows platforms but also much slower.
my_strcoll()! The patch should always be safe to install (for any
system), but as this patch changes ISAM internals it's not yet in the
default distribution.
DATE_ADD() and DATE_SUB() didn't work with group functions.
mysql will now also try to reconnect on USE DATABASE commands.
ORDER BY and LEFT JOIN and const tables.
ORDER BY if the first ORDER BY column
was a key and the rest of the ORDER BY columns wasn't part of the key.
OPTIMIZE TABLE.
DROP TABLE and mysqladmin shutdown on Windows
(a fatal bug from 3.22.6).
TIME columns and negative strings.
LIMIT clause for the DELETE statement.
/*! ... */ syntax to hide MySQL-specific
keywords when you write portable code. MySQL will parse the code
inside the comments as if the surrounding /*! and */ comment
characters didn't exist.
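For example, a MySQL-specific keyword can be hidden from other SQL servers like this (t1 and t2 are hypothetical tables):
mysql> SELECT /*! STRAIGHT_JOIN */ t1.a FROM t1,t2 WHERE t1.a=t2.a;
Other servers see only an ordinary comment; MySQL parses the keyword inside it.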
OPTIMIZE TABLE tbl_name can now be used to reclaim disk space
after many deletes. Currently, this uses ALTER TABLE to
regenerate the table, but in the future it will use an integrated
isamchk for more speed.
libtool to get the configure more portable.
UPDATE and DELETE operations when using
DATETIME or DATE keys.
mysqladmin proc to display information about your own
threads. Only users with the PROCESS privilege can get
information about all threads.
(In 4.0.2 one needs the SUPER privilege for this.)
YYMMDD, YYYYMMDD,
YYMMDDHHMMSS for numbers when using DATETIME and
TIMESTAMP types. (Formerly these formats only worked with strings.)
CLIENT_IGNORE_SPACE to allow use of spaces
after function names and before `(' (Powerbuilder requires this).
This will make all function names reserved words.
--log-long-format option to mysqld to enable timestamps
and INSERT_IDs in the update log.
--where option to mysqldump (patch by Jim Faucette).
mysqldump.
LOAD DATA INFILE statement, you can now use the new LOCAL
keyword to read the file from the client. mysqlimport will
automatically use LOCAL when importing with the TCP/IP protocol.
DROP TABLE, ALTER TABLE, DELETE FROM
TABLE and mysqladmin flush-tables under heavy usage.
Changed locking code to get better handling of locks of different types.
DBI to 1.00 and DBD to 1.2.0.
mysqld. (To avoid errors if you accidentally
try to use an old error message file.)
affected_rows(),
insert_id(), ...) are now of type BIGINT to allow 64-bit values
to be used.
This required a minor change in the MySQL protocol which should affect
only old clients when using tables with AUTO_INCREMENT values > 16M.
mysql_fetch_lengths() has changed from uint *
to ulong *. This may give a warning for old clients but should work
on most machines.
mysys and dbug libraries to allocate all thread variables
in one struct. This makes it easier to make a threaded `libmysql.dll'
library.
gethostname() (instead of uname()) when
constructing `.pid' file names.
COUNT(), STD() and AVG() are extended to handle more than
4G rows.
-838:59:59 <= x <=
838:59:59 in a TIME column.
TIME column to too short a value, MySQL now
assumes the value is given as: [[[D ]HH:]MM:]SS instead of
HH[:MM[:SS]].
TIME_TO_SEC() and SEC_TO_TIME() can now handle negative times
and hours up to 32767.
SET SQL_LOG_UPDATE={0|1} to allow users with
the PROCESS privilege to bypass the update log.
(Modified patch from Sergey A Mukhin violet@rosnet.net.)
LPAD().
BLOB reading from
pipes safer.
-O max_connect_errors=# option to mysqld.
Connect errors are now reset for each correct connection.
max_allowed_packet to 1M in
mysqld.
--low-priority-updates option to mysqld, to give
table-modifying operations (INSERT, REPLACE, UPDATE,
DELETE) lower priority than retrievals. You can now use
{INSERT | REPLACE | UPDATE | DELETE} LOW_PRIORITY ... You can
also use SET SQL_LOW_PRIORITY_UPDATES={0|1} to change
the priority for one thread. One side effect is that LOW_PRIORITY
is now a reserved word. :(
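For example, either of the following lowers the priority of an update (t1 and a are hypothetical names):
mysql> UPDATE LOW_PRIORITY t1 SET a=a+1;
mysql> SET SQL_LOW_PRIORITY_UPDATES=1;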
INSERT INTO table ... VALUES(...),(...),(...),
to allow inserting multiple rows with a single statement.
INSERT INTO tbl_name is now also cached when used with LOCK TABLES.
(Previously only INSERT ... SELECT and LOAD DATA INFILE were
cached.)
GROUP BY functions with HAVING:
mysql> SELECT col FROM table GROUP BY col HAVING COUNT(*)>0;
mysqld will now ignore trailing `;' characters in queries. This
is to make it easier to migrate from some other SQL servers that require the
trailing `;'.
SELECT INTO OUTFILE.
GREATEST() and LEAST() functions. You must now use
these instead of the MAX() and MIN() functions to get the
largest/smallest value from a list of values. These can now handle REAL,
BIGINT and string (CHAR or VARCHAR) values.
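For example:
mysql> SELECT GREATEST(2,7,5), LEAST('b','a','c');
This returns 7 and 'a'.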
DAYOFWEEK() had offset 0 for Sunday. Changed the offset to 1.
GROUP BY columns and fields when
there is no GROUP BY specification.
--vertical option to mysql, for printing results in
vertical mode.
--tmpdir option to mysqld, for specifying the location
of the temporary file directory.
SELECT ... FROM table WHERE auto_increment_column IS NULL
to:
SELECT ... FROM table WHERE auto_increment_column = LAST_INSERT_ID()
This allows some ODBC programs (Delphi, Access) to retrieve the newly inserted row to fetch the
AUTO_INCREMENT id.
DROP TABLE now waits for all users to free a table before deleting it.
BIN(), OCT(), HEX() and CONV() for
converting between different number bases.
SUBSTRING() with 2 arguments.
ORDER BY and
GROUP BY.
mysqld now automatically disables system locking on Linux and Windows,
and for systems that use MIT-pthreads. You can force the use of locking
with the --enable-external-locking option.
--console option to mysqld, to force a console window
(for error messages) when using Windows.
DATE_ADD() and DATE_SUB() functions.
mysql_ping() to the client library.
--compress option to all MySQL clients.
byte to char in `mysql.h' and `mysql_com.h'.
<<, >>, RPAD() and LPAD().
ORDER BY to work when no records are found
when using fields that are not in GROUP BY (MySQL extension).
--chroot option to mysqld, to start mysqld in
a chroot environment (by Nikki Chumakov nikkic@cityline.ru).
--one-thread option to mysqld, for debugging with
LinuxThreads (or glibc). (This replaces the -T32 flag)
DROP TABLE IF EXISTS to prevent an error from occurring if the
table doesn't exist.
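For example (t1 is a hypothetical table that may or may not exist):
mysql> DROP TABLE IF EXISTS t1;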
IF and EXISTS are now reserved words (they would have to
be sooner or later).
mysqldump.
mysql_ping().
mysql_init() and mysql_options().
You now MUST call mysql_init() before you call
mysql_real_connect().
You don't have to call mysql_init() if you only use
mysql_connect().
mysql_options(...,MYSQL_OPT_CONNECT_TIMEOUT,...) so you can set a
timeout for connecting to a server.
--timeout option to mysqladmin, as a test of
mysql_options().
AFTER column and FIRST options to
ALTER TABLE ... ADD columns.
This makes it possible to add a new column at some specific location
within a row in an existing table.
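For example (t1, a, b and z are hypothetical names):
mysql> ALTER TABLE t1 ADD b INT AFTER a;
mysql> ALTER TABLE t1 ADD z INT FIRST;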
WEEK() now takes an optional argument to allow handling of weeks when
the week starts on Monday (some European countries). By default,
WEEK() assumes the week starts on Sunday.
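For example:
mysql> SELECT WEEK('1998-02-20'), WEEK('1998-02-20',1);
The first call counts weeks starting on Sunday, the second starting on Monday, so the two results may differ by one.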
TIME columns weren't stored properly (bug in MySQL Version 3.22.0).
UPDATE now returns information about how many rows were
matched and updated, and how many ``warnings'' occurred when doing the update.
FORMAT(-100,2).
ENUM and SET columns were compared in binary (case-sensitive)
fashion; changed to be case-insensitive.
mysql_real_connect() call is changed to:
mysql_real_connect(MYSQL *mysql, const char *host, const char *user,
const char *passwd, const char *db, uint port,
const char *unix_socket, uint client_flag)
accept() thread. This permanently fixes the telnet bug
that was a topic on the mailing list some time ago.
mysqld now has a local hostname
resolver cache so connections should actually be faster than before,
even with this feature.
tbl_name@db_name or db_name.tbl_name. This makes it possible to
give a user read access to some tables and write access to others simply by
keeping them in different databases!
--user option to mysqld, to allow it to run
as another Unix user (if it is started as the Unix root user).
mysqladmin password 'new_password'. This uses encrypted passwords
that are not logged in the normal MySQL log!
SELECT code to handle some very specific queries
involving group functions (like COUNT(*)) without a GROUP BY but
with HAVING. The following now works:
mysql> SELECT COUNT(*) as C FROM table HAVING C > 1;
malloc().
-T32 option to mysqld, for running all queries under the
main thread. This makes it possible to debug mysqld under Linux with
gdb!
not_null_column IS NULL (needed for some Access
queries).
STRAIGHT_JOIN to be used between two tables to force the optimiser
to join them in a specific order.
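For example, to force the optimiser to read t1 before t2 (both hypothetical tables):
mysql> SELECT * FROM t1 STRAIGHT_JOIN t2 WHERE t1.a=t2.a;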
VARCHAR rather than CHAR and
the column type is now VARCHAR for fields saved as VARCHAR.
This should make the MyODBC driver better, but may break some old
MySQL clients that don't handle FIELD_TYPE_VARCHAR the same
way as FIELD_TYPE_CHAR.
CREATE INDEX and DROP INDEX are now implemented through
ALTER TABLE.
CREATE TABLE is still the recommended (fast) way to create indexes.
--set-variable option wait_timeout to mysqld.
mysqladmin processlist to show how long a query
has taken or how long a thread has slept.
show variables and some new to
show status.
YEAR. YEAR is stored in 1 byte with allowable
values of 0, and 1901 to 2155.
DATE type that is stored in 3 bytes rather than 4 bytes.
All new tables are created with the new date type if you don't use the
--old-protocol option to mysqld.
Error from table handler: # on some operating systems.
--enable-assembler option to configure, for x86 machines
(tested on Linux + gcc). This will enable assembler functions for the
most important string functions for more speed!
Version 3.21 is quite old now, and should be avoided if possible. This information is kept here for historical purposes only.
SIGHUP to mysqld;
mysqld core dumped when starting from boot on some systems.
DELETE FROM tbl_name without a WHERE condition is now done the
long way when you use LOCK TABLES or if the table is in use, to
avoid race conditions.
INSERT INTO TABLE (timestamp_column) VALUES (NULL); didn't set timestamp.
mysqladmin
refresh often. This could in some very rare cases corrupt the header of the
index file and cause error 126 or 138.
refresh() when running with the
--skip-external-locking option. There was a ``very small'' time gap after
a mysqladmin refresh when a table could be corrupted if one
thread updated a table while another thread did mysqladmin
refresh and another thread started a new update on the same table
before the first thread had finished. A refresh (or
--flush-tables) will now not return until all used tables are
closed!
SELECT DISTINCT with a WHERE clause that didn't match any rows
returned a row in some contexts (bug only in 3.21.31).
GROUP BY + ORDER BY returned one empty row when no rows were
found.
Use_count: Wrong count for ... in the error log file.
TINYINT type on Irix.
LEFT("constant_string",function).
FIND_IN_SET().
LEFT JOIN core dumped if the second table is used with a constant
WHERE/ON expression that uniquely identifies one record.
DATE_FORMAT() and incorrect dates.
DATE_FORMAT() now ignores '%' to make it possible to extend
it more easily in the future.
mysql now returns an exit code > 0 if the query returned an error.
mysql client.
By Tommy Larsen tommy@mix.hive.no.
safe_mysqld to redirect startup messages to
'hostname'.err instead
of 'hostname'.log to reclaim file space on mysqladmin refresh.
ENUM always had the first entry as default value.
ALTER TABLE wrote two entries to the update log.
sql_acc() now closes the mysql grant tables after a reload to
save table space and memory.
LOAD DATA to use less memory with tables and BLOB
columns.
SELECT problem with LEFT() when using the czech
character set.
isamchk; it couldn't repair a packed table in a very
unusual case.
SELECT statements with & or | (bit functions) failed on
columns with NULL values.
LOCK TABLES + DELETE from tbl_name never removed locks properly.
OR function.
umask() and creating new databases.
SELECT ... INTO OUTFILE ...
MIN(integer) or MAX(integer) in
GROUP BY.
WEEK("XXXX-xx-01").
Error from table handler: # on some operating systems.
GET_LOCK(string,timeout),
RELEASE_LOCK(string).
Opened_tables to show status.
mysqld through telnet + TCP/IP.
WHERE key_part_1 >= something AND key_part_2 <= something_else.
configure for detection of FreeBSD 3.0 9803xx and above
WHERE with string_col_key = constant_string didn't always
find all rows if the column had many values differing only with
characters of the same sort value (like e and e with an accent).
umask() to make log files non-readable for normal users.
--old-protocol option to mysqld.
SELECT which matched all key fields returned the values in the
case of the matched values, not of the found values. (Minor problem.)
FROM_DAYS(0) now returns "0000-00-00".
DATE_FORMAT(), PM and AM were swapped for hours 00 and 12.
BLOB/TEXT in GROUP BY with many
tables.
ENUM field that is not declared NOT NULL has NULL as
the default value.
(Previously, the default value was the first enumeration value.)
INDEX (Organisation,Surname(35),Initials(35)).
SELECT ... FROM many_tables much faster.
accept() to possibly fix some problems on some
Linux machines.
typedef 'string' to typedef 'my_string' for better
portability.
isamchk. Try isamchk --help.
filesort() didn't work.
Affects DISTINCT, ORDER BY and GROUP BY on 64-bit
processors.
SELECT on the
table.
OR operators on key parts
inside each other.
MIN() and MAX() to work properly with strings and
HAVING.
0664 to 0660.
LEFT JOIN and constant expressions in the ON
part.
configure now works better on OSF/1 (tested on 4.0D).
LIKE optimisation with international character
support.
DBI to 0.93.
TIME, DATE, TIMESTAMP, TEXT, BIT,
ENUM, NO, ACTION, CHECK, YEAR,
MONTH, DAY, HOUR, MINUTE, SECOND,
STATUS, VARIABLES.
TIMESTAMP to NULL in LOAD DATA INFILE ... didn't
set the current time for the TIMESTAMP.
BETWEEN to recognise binary strings. Now BETWEEN is
case-sensitive.
--skip-thread-priority option to mysqld, for systems
where mysqld's thread scheduling doesn't work properly (BSDI 3.1).
DAYNAME() and MONTHNAME().
TIME_FORMAT(). This works like DATE_FORMAT(),
but takes a time string ('HH:MM:DD') as argument.
ORs of key parts
inside ANDs.
variables command to mysqladmin.
ALTER TABLE to work with Windows (Windows can't rename
open files). Also fixed a couple of small bugs in the Windows version.
crash-me and the benchmarks on
the following platforms: SunOS 5.6 sun4u, SunOS 5.5.1 sun4u, SunOS 4.14 sun4c,
SunOS 5.6 i86pc, Irix 6.3 mips5k, HP-UX 10.20 hppa, AIX 4.2.1 ppc,
OSF/1 V4.0 alpha, FreeBSD 2.2.2 i86pc and BSDI 3.1 i386.
COUNT(*) problems when the WHERE clause didn't match any
records. (Bug from 3.21.17.)
NULL = NULL is true. Now you must use IS NULL
or IS NOT NULL to test whether a value is NULL.
(This is according to SQL-99 but may break
old applications that are ported from mSQL.)
You can get the old behaviour by compiling with -DmSQL_COMPLIANT.
LEFT OUTER JOIN clauses.
ORDER BY on string formula with possible NULL values.
<= on sub index.
DAYOFYEAR(), DAYOFMONTH(), MONTH(),
YEAR(), WEEK(), QUARTER(), HOUR(), MINUTE(),
SECOND() and FIND_IN_SET().
SHOW VARIABLES command.
mysql> SELECT 'first ' 'second'; -> 'first second'
mysqlaccess to 2.02.
LIKE.
WHERE data_field = date_field2 AND date_field2 = constant.
SHOW STATUS command.
mysqladmin stat to return the right number of queries.
AUTO_INCREMENT attribute or is a TIMESTAMP. This is needed for
the new Java driver.
configure bugs and increased maximum table size
from 2G to 4G.
DBD to 1.1823. This version implements mysql_use_result
in DBD-Mysql.
REVERSE() (by Zeev Suraski).
DBI to 0.91.
LEFT OUTER JOIN.
CROSS JOIN syntax. CROSS is now a reserved word.
yacc/bison stack allocation to be even safer and to allow
MySQL to handle even bigger expressions.
ORDER BY was slow when used with key ranges.
--with-unix-socket-path to avoid
confusion.
LEFT OUTER JOIN.
LEFT, NATURAL,
USING.
MYSQL_HOST as the default host if it's defined.
SELECT col_name, SUM(expr) now returns NULL for col_name
when there are no matching rows.
BLOBs with ASCII
characters over 127.
mysqld
restart if one thread was reading data that another thread modified.
LIMIT offset,count didn't work in INSERT ... SELECT.
POWER(), SPACE(),
COT(), DEGREES(), RADIANS(), ROUND(2 arg)
and TRUNCATE().
LOCATE() parameters were
swapped according to ODBC standard. Fixed.
TIME_TO_SEC().
NOT NULL fields.
UPDATE SET ... statements.
BLOB and TEXT, to
be compatible with mysqldump.
mysqlperl is now from
Msql-Mysql-modules. This means that connect() now takes
host, database, user, password arguments! The old
version took host, database, password, user.
DATE '1997-01-01', TIME '12:10:10' and
TIMESTAMP '1997-01-01 12:10:10' formats required by SQL-99.
Warning: Incompatible change! This has the unfortunate
side-effect that you no longer can have columns named DATE, TIME
or TIMESTAMP. :( Old columns can still be accessed through
tablename.columnname!)
make programs trying to rebuild it.
readline library upgraded to version 2.1.
DBI/DBD is now included in the distribution. DBI
is now the recommended way to connect to MySQL from Perl.
DBD, with test results from
mSQL 2.0.3, MySQL, PostgreSQL 6.2.1 and Solid server 2.2.
crash-me is now included with the benchmarks; this is a Perl program
designed to find as many limits as possible in a SQL server. Tested with
mSQL, PostgreSQL, Solid and MySQL.
mysql command-line tool, by Zeev
Suraski and Andi Gutmans.
REPLACE that works like INSERT but
replaces conflicting records with the new record. REPLACE INTO
TABLE ... SELECT ... works also.
CREATE DATABASE db_name and DROP
DATABASE db_name.
RENAME option to ALTER TABLE: ALTER TABLE name
RENAME TO new_name.
make_binary_distribution now includes `libgcc.a' in
`libmysqlclient.a'. This should make linking work for people who don't
have gcc.
net_write() to my_net_write() because of a name
conflict with Sybase.
DAYOFWEEK() compatible with ODBC.
bison memory overrun checking to make MySQL
safer with weird queries.
configure problems on some platforms.
DATE_FORMAT().
NOT IN.
{fn now() }
DATE and TIME values with NULL.
FLOAT. Previously, the
values were converted to INTs before sorting.
key_column=constant.
DOUBLE values sorted on integer results instead.
mysql no longer requires a database argument.
HAVING should be. According to the SQL standards, it should
be after GROUP BY but before ORDER BY. MySQL Version 3.20
incorrectly had it last.
USE DATABASE to start using another database.
mysqld doesn't crash even if you haven't done a
ulimit -n 256 before starting mysqld.
errno.
This makes Linux systems much safer!
SELECT.
LIKE on number key.
--table option to mysql to print in table format.
Moved time and row information after query result.
Added automatic reconnect of lost connections.
!= as a synonym for <>.
VERSION() to make easier logs.
ftruncate() call in MIT-pthreads. This made isamchk
destroy the `.ISM' files on (Free)BSD 2.x systems.
__P_ patch in MIT-pthreads.
NULL
if the returned string should be longer than max_allowed_packet bytes.
INTERVAL type to ENUM, because
INTERVAL is used in SQL-99.
JOIN + GROUP + INTO OUTFILE,
the result wasn't grouped.
LIKE with '_' as last character didn't work. Fixed.
TRIM() function.
CURTIME().
ENCRYPT() function by Zeev Suraski.
FOREIGN KEY syntax skipping. New reserved words:
MATCH, FULL, PARTIAL.
mysqld now allows IP number and hostname for the --bind-address
option.
SET CHARACTER SET cp1251_koi8 to enable conversions of
data to and from the cp1251_koi8 character set.
CREATE COLUMN syntax of NOT NULL columns to be after
the DEFAULT value, as specified in the SQL-99 standard. This will
make mysqldump with NOT NULL and default values incompatible with
MySQL Version 3.20.
ALTER TABLE tbl_name ALTER COLUMN col_name SET DEFAULT
NULL.
CHAR and BIT as synonyms for CHAR(1).
SELECT privilege.
INSERT ... SELECT ... GROUP BY didn't work in some cases. An
Invalid use of group function error occurred.
LIMIT, SELECT now always uses keys instead of record
scan. This will give better performance on SELECT and a WHERE
that matches many rows.
BIT_OR() and BIT_AND().
CHECK and REFERENCES.
CHECK is now a reserved word.
ALL option to GRANT for better compatibility. (GRANT
is still a dummy function.)
ORDER BY and GROUP BY with NULL columns.
LAST_INSERT_ID() SQL function to retrieve last
AUTO_INCREMENT
value. This is intended for ODBC clients that can't use the
mysql_insert_id() API function, but it can be used by any client.
--flush-logs option to mysqladmin.
STATUS to mysql.
ORDER BY/GROUP BY because of bug in gcc.
INSERT ... SELECT ... GROUP BY.
mysqlaccess.
CREATE now supports all ODBC types and the mSQL TEXT type.
All ODBC 2.5 functions are also supported (added REPEAT). This provides
better portability.
TINYTEXT, TEXT, MEDIUMTEXT and
LONGTEXT. These are actually BLOB types, but all searching is
done in case-insensitive fashion.
BLOB fields are now TEXT fields. This only
changes that all searching on strings is done in case-insensitive fashion.
You must do an ALTER TABLE and change the data type to BLOB
if you want to have tests done in case-sensitive fashion.
configure issues.
test-select works.
--enable-unix-socket=pathname option to configure.
SUM() functions.
For example, you can now use SUM(column)/COUNT(column).
PI(), ACOS(), ASIN(), ATAN(), COS(),
SIN() and TAN().
net_print() in `procedure.cc'.
SELECT ... INTO OUTFILE syntax.
GROUP BY and SELECT on key with many values.
mysql_fetch_lengths() sometimes returned incorrect lengths when you used
mysql_use_result(). This affected at least some cases of
mysqldump --quick.
WHERE const op field.
NULL fields.
--pid-file=# option to mysqld.
FROM_UNIXTIME(), originally by Zeev Suraski.
BETWEEN in range optimiser (did only test = of the first
argument).
mysql_errno(), to get the error number of
the error message. This makes error checking in the client much easier.
This makes the new server incompatible with the 3.20.x server when running
without --old-protocol. The client code is backward-compatible.
More information can be found in the `README' file!
sigwait and sigset
defines).
configure should now be able to detect the last argument to
accept().
-O tmp_table_size=# option to mysqld.
FROM_UNIXTIME(timestamp) which returns a date string in
'YYYY-MM-DD HH:MM:DD' format.
SEC_TO_TIME(seconds) which returns a string in
'HH:MM:SS' format.
SUBSTRING_INDEX(), originally by Zeev Suraski.
mysqld doesn't work on it yet.
pthread_create to work.
mysqld doesn't accept hostnames that start with digits followed by a
'.', because the hostname may look like an IP number.
--skip-networking option to mysqld, to allow only socket
connections. (This will not work with MIT-pthreads!)
free() that killed the server on
CREATE DATABASE or DROP DATABASE.
mysqld -O options to better names.
-O join_cache_size=# option to mysqld.
-O max_join_size=# option to mysqld, to be able to set a
limit how big queries (in this case big = slow) one should be able to handle
without specifying SET SQL_BIG_SELECTS=1. A # = is about 10
examined records. The default is ``unlimited''.
TIME, DATE, DATETIME or TIMESTAMP
column to a constant, the constant is converted to a time value before
performing the comparison.
This will make it easier to get ODBC (particularly Access97) to work with
the above types. It should also make dates easier to use and the comparisons
should be quicker than before.
query() in
mysqlperl to take a query with \0 in it.
YYMMDD) didn't work.
UPDATE
clause.
SELECT * INTO OUTFILE, which didn't work correctly if the outfile already
existed.
mysql now shows the thread ID when starting or doing a reconnect.
--new, but it still dumps core a lot...
isam library should be relatively 64-bit clean.
isamchk which can detect and fix more problems.
isamlog.
mysqladmin: you can now do mysqladmin kill 5,6,7,8 to kill
multiple threads.
-O backlog=# option to mysqld.
ALTER TABLE now returns warnings from field conversions.
ASCII().
BETWEEN(a,b,c). Use the standard SQL
syntax instead: expr BETWEEN expr AND expr.
SUM() functions.
tbl_name.field_name in UPDATE.
SELECT DISTINCT when using 'hidden group'. For example:
mysql> SELECT DISTINCT MOD(some_field,10) FROM test
-> GROUP BY some_field;
Note: some_field is normally in the SELECT part. Standard SQL should
require it.
INTERVAL, EXPLAIN, READ,
WRITE, BINARY.
CHAR(num,...).
IN. This uses a binary search to find a match.
LOCK TABLES tbl_name [AS alias] {READ|WRITE} ...
--log-update option to mysqld, to get a log suitable for
incremental updates.
EXPLAIN SELECT ... to get information about how the
optimiser will do the join.
FIELD_TYPE_TINY_BLOB, FIELD_TYPE_MEDIUM_BLOB,
FIELD_TYPE_LONG_BLOB or FIELD_TYPE_VAR_STRING (as
previously returned by mysql_list_fields). You should instead only use
FIELD_TYPE_BLOB or FIELD_TYPE_STRING. If you want exact
types, you should use the command SHOW FIELDS.
0x###### which can be used as a string
(default) or a number.
FIELD_TYPE_CHAR is renamed to FIELD_TYPE_TINY.
DEFAULT values no longer need to be NOT NULL.
ENUM
SET
double or long long.
This will provide the full 64-bit range with bit functions and fix some
conversions that previously could result in precision losses. One should
avoid using unsigned long long columns with full 64-bit range
(numbers bigger than 9223372036854775807) because calculations are done
with signed long long.
ORDER BY will now put NULL field values first. GROUP BY
will also work with NULL values.
WHERE with expressions.
mysql> SELECT * FROM tbl_name
-> WHERE key_part_1="customer"
-> AND key_part_2>=10 AND key_part_2<=10;
Version 3.20 is quite old now, and should be avoided if possible. This information is kept here for historical purposes only.
Changes from 3.20.18 to 3.20.32b are not documented here because the 3.21 release branched at that point, and the relevant changes are documented as changes to the 3.21 version.
-p# (remove # directories from path) to isamlog.
All files are written with a relative path from the database directory
Now mysqld shouldn't crash on shutdown when using the
--log-isam option.
mysqlperl version. It is now compatible with msqlperl-0.63.
DBD module available.
STD() (standard deviation).
mysqld server is now compiled by default without debugging
information. This will make the daemon smaller and faster.
--basedir option to
mysqld. All other paths are relative in a normal installation.
BLOB columns sometimes contained garbage when used with a SELECT
on more than one table and ORDER BY.
GROUP BY work as expected
(SQL-99 extension).
Example:
mysql> SELECT id,id+1 FROM table GROUP BY id;
MYSQL_PWD was reversed. Now MYSQL_PWD is
enabled as default in the default release.
mysqld to core dump with
Arithmetic error on SPARC-386.
--unbuffered option to mysql, for new mysqlaccess.
BLOB columns and the functions IS NULL and
IS NOT NULL in the WHERE clause.
max_allowed_packet is now 64K for
the server and 512K for the client. This is mainly used to catch
incorrect packets that could trash all memory. The server limit may be
changed when it is started.
safe_mysqld to check for running daemon.
ELT() function is renamed to FIELD(). The new
ELT() function returns a value based on an index: FIELD()
is the inverse of ELT(). Example: ELT(2,"A","B","C") returns
"B". FIELD("B","A","B","C") returns 2.
COUNT(field), where field could have a NULL value, now
works.
SELECT ... GROUP BY.
WHERE with many unoptimisable brace levels.
get_hostname, only the IP is checked.
Previously, you got Access denied.
INSERT INTO ... SELECT ... WHERE could give the error
Duplicated field.
safe_mysqld to make it ``safer''.
LIKE was case-sensitive in some places and case-insensitive in others.
Now LIKE is always case-insensitive.
'#' anywhere on the line.
SET SQL_SELECT_LIMIT=#. See the FAQ for more details.
mysqlaccess script.
FROM_DAYS() and WEEKDAY() to also take a full
TIMESTAMP or DATETIME as argument. Before they only took a
number of type YYYYMMDD or YYMMDD.
UNIX_TIMESTAMP(timestamp_column).
mysqld to work around a bug in MIT-pthreads. This makes multiple
small SELECT operations 20 times faster. Now lock_test.pl should
work.
mysql_FetchHash(handle) to mysqlperl.
mysqlbug script is now distributed pre-built, so it can be used to report
bugs that appear during the build.
getpwuid() instead of
cuserid().
SELECT optimiser when using many tables with the same
column used as key to different tables.
latin2 and Russian KOI8 character tables.
GRANT command to satisfy Powerbuilder.
packets out of order when using MIT-pthreads.
fcntl() fails. Thanks to Mike Bretz for finding this bug.
termbits from `mysql.cc'. This conflicted with
glibc 2.0.
SELECT as superuser without a database.
SELECT with group calculation to outfile.
-p or --password option to mysql without
an argument, the user is solicited for the password from the tty.
MYSQL_PWD (by Elmar Haneke).
kill to mysqladmin to kill a specific
MySQL thread.
AUTO_INCREMENT key with ALTER_TABLE.
AVG() gave too small value on some SELECTs with
GROUP BY and ORDER BY.
DATETIME type (by Giovanni Maruzzelli
maruzz@matrice.it).
DONT_USE_DEFAULT_FIELDS works.
CREATE INDEX.
DATE, TIME and
TIMESTAMP.
OR of multiple tables (gave empty set).
DATE and TIME types.
SELECT with AND-OR levels.
LIMIT and ORDER BY.
ORDER BY and GROUP BY on items that aren't in the
SELECT list.
(Thanks to Wim Bonis bonis@kiss.de, for pointing this out.)
INSERT.
SELECT ... WHERE ... = NULL.
glibc 2.0. To get glibc to work, you should
add the `gibc-2.0-sigwait-patch' before compiling glibc.
ALTER TABLE when changing a NOT NULL field to
allow NULL values.
CREATE TABLE.
CREATE TABLE now allows FLOAT(4) and FLOAT(8) to mean
FLOAT and DOUBLE.
mysqlaccess by Yves.Carlier@rug.ac.be.
This program shows the access rights for a specific user and the grant
rows that determine this grant.
WHERE const op field (by bonis@kiss.de).
SELECT ... INTO OUTFILE, all temporary tables are ISAM
instead of HEAP to allow big dumps.
ALTER TABLE for SQL-92 compliance.
--port and --socket options to all utility programs and
mysqld.
readdir_r(). Now mysqladmin create database
and mysqladmin drop database should work.
tempnam(). This should fix the ``sort
aborted'' bug.
sql_update. This fixed slow updates
on first connection. (Thanks to Vaclav Bittner for the test.)
INSERT INTO ... SELECT ...
MEDIUMBLOB fixed.
ALTER TABLE and BLOBs.
SELECT ... INTO OUTFILE now creates the file in the current
database directory.
DROP TABLE now can take a list of tables.
DESCRIBE (DESC).
make_binary_distribution.
configure's
C++ link test.
--without-perl option to configure.
ALTER TABLE didn't copy null bit. As a result, fields that were allowed
to have NULL values were always NULL.
CREATE didn't take numbers as DEFAULT.
ALTER TABLE and multi-part keys.
ALTER TABLE, SELECT ... INTO OUTFILE and
LOAD DATA INFILE.
NOW().
File_priv to mysql/user table.
add_file_priv which adds the new field File_priv
to the user table. This script must be executed if you want to
use the new SELECT ... INTO and LOAD DATA INFILE ... commands
with a version of MySQL earlier than 3.20.7.
lock_test.pl test fail.
status command to mysqladmin for short logging.
-k option to mysqlshow, to get key information for a table.
mysqldump.
configure cannot find a -lpthreads
library.
program --help.
RAND([init]).
sql_lex to handle \0 unquoted, but the client can't send
the query through the C API, because it takes a str pointer.
You must use mysql_real_query() to send the query.
mysql_get_client_info().
mysqld now uses the N_MAX_KEY_LENGTH from `nisam.h' as
the maximum allowable key length.
mysql> SELECT filter_nr,filter_nr FROM filter ORDER BY filter_nr;
Previously, this resulted in the error:
Column: 'filter_nr' in order clause is ambiguous.
mysql now outputs '\0', '\t', '\n' and '\\'
when encountering ASCII 0, tab, newline or '\' while writing
tab-separated output.
This is to allow printing of binary data in a portable format.
To get the old behaviour, use -r (or --raw).
mysql_fetch_lengths(MYSQL_RES *), which
returns an array of column lengths (of type uint).
IS NULL in WHERE clause.
SELECT option STRAIGHT_JOIN to tell the optimiser that
it should join tables in the given order.
'--' in `mysql.cc'
(Postgres syntax).
SELECT expressions and table columns in a SELECT
which are not used in the group part. This makes it efficient to implement
lookups. The column that is used should be a constant for each group because
the value is calculated only once for the first row that is found for a group.
mysql> SELECT id,lookup.text,SUM(*) FROM test,lookup
-> WHERE test.id=lookup.id GROUP BY id;
SUM(function) (could cause a core dump).
AUTO_INCREMENT placement in the SQL query:
INSERT INTO table (auto_field) VALUES (0);
inserted 0, but it should insert an
AUTO_INCREMENT value.
mysql now allows doubled '' or "" within strings for
embedded ' or ".
EXP(), LOG(), SQRT(), ROUND(), CEILING().
configure source now compiles a thread-free client library
-lmysqlclient. This is the only library that needs to be linked
with client applications. When using the binary releases, you must
link with -lmysql -lmysys -ldbug -lmystrings as before.
readline library from bash-2.0.
configure and makefiles (and related source).
VPATH. Tested with GNU Make 3.75.
safe_mysqld and mysql.server changed to be more compatible
between the source and the binary releases.
LIMIT now takes one or two numeric arguments.
If one argument is given, it indicates the maximum number of rows in
a result. If two arguments are given, the first argument indicates the offset
of the first row to return, the second is the maximum number of rows.
With this it's easy to do a poor man's next page/previous page WWW
application.
FIELDS() to ELT().
Changed SQL function INTERVALL() to INTERVAL().
SHOW COLUMNS a synonym for SHOW FIELDS.
Added compatibility syntax FRIEND KEY to CREATE TABLE. In
MySQL, this creates a non-unique key on the given columns.
CREATE INDEX and DROP INDEX as compatibility functions.
In MySQL, CREATE INDEX only checks if the index exists and
issues an error if it doesn't exist. DROP INDEX always succeeds.
sql_acl (core on new connection).
host, user and db tables from database test
in the distribution.
FIELD_TYPE_CHAR can now be signed (-128 to 127) or unsigned (0 to 255).
Previously, it was always unsigned.
CONCAT() and WEEKDAY().
mysqld to be compiled with SunPro
compiler.
'(' immediately after the function name
(no intervening space).
For example, 'USER(' is regarded as beginning a function call, and
'USER (' is regarded as an identifier USER followed by a
'(', not as a function call.
configure and Automake.
It will make porting much easier. The readline library is included
in the distribution.
DBD will follow when the new DBD code
is ported.
mysqld can now be started with Swedish
or English (default) error messages.
INSERT(), RTRIM(), LTRIM() and
FORMAT().
mysqldump now works correctly for all field types (even
AUTO_INCREMENT). The format for SHOW FIELDS FROM tbl_name
is changed so the Type column contains information suitable for
CREATE TABLE. In previous releases, some CREATE TABLE
information had to be patched when re-creating tables.
BLOB and TIMESTAMP) are corrected.
TIMESTAMP now returns different date information depending on its
create length.
'_'.
Version 3.19 is quite old now, and should be avoided if possible. This information is kept here for historical purposes only.
DATABASE(), USER(), POW(),
LOG10() (needed for ODBC).
WHERE with an ORDER BY on fields from only one table,
the table is now preferred as first table in a multi-join.
HAVING and IS NULL or IS NOT NULL now works.
SUM(),
AVG()...) didn't work together. Fixed.
mysqldump: Didn't send password to server.
'Locked' to process list as info if a query is
locked by another query.
IF(arg,syntax_error,syntax_error) crashed.
CEILING(), ROUND(), EXP(), LOG() and SQRT().
BETWEEN to handle strings.
SELECT with grouping on BLOB columns not to return
incorrect BLOB info. Grouping, sorting and distinct on BLOB
columns will not yet work as
expected (probably it will group/sort by the first 7 characters in the
BLOB). Grouping on formulas with a fixed string size (use MID()
on a BLOB) should work.
BLOB
fields, the BLOB was garbage on output.
DISTINCT with calculated columns.
This appendix will help you port MySQL to other operating systems. Do check the list of currently supported operating systems first. See section 2.2.5 Operating Systems Supported by MySQL. If you have created a new port of MySQL, please let us know so that we can list it here and on our web site (http://www.mysql.com/), recommending it to other users.
Note: If you create a new port of MySQL, you are free to copy and
distribute it under the GPL license, but it does not make you a
copyright holder of MySQL.
A working Posix thread library is needed for the server. On Solaris 2.5 we use Sun PThreads (the native thread support in 2.4 and earlier versions is not good enough), on Linux we use LinuxThreads by Xavier Leroy, Xavier.Leroy@inria.fr.
The hard part of porting to a new Unix variant without good native thread support is probably to port MIT-pthreads. See `mit-pthreads/README' and Programming POSIX Threads (http://www.humanfactor.com/pthreads/).
Up to MySQL 4.0.2, the MySQL distribution included a patched version of Chris Provenzano's Pthreads from MIT (see the MIT Pthreads web page at http://www.mit.edu/afs/sipb/project/pthreads/ and a programming introduction at http://www.mit.edu:8001/people/proven/IAP_2000/). These can be used for some operating systems that do not have POSIX threads. See section 2.3.6 MIT-pthreads Notes.
It is also possible to use another user level thread package named FSU Pthreads (see http://moss.csc.ncsu.edu/~mueller/pthreads/). This implementation is being used for the SCO port.
See the `thr_lock.c' and `thr_alarm.c' programs in the `mysys' directory for some tests/examples of these problems.
Both the server and the client need a working C++ compiler. We use gcc
on many platforms. Other compilers that are known to work are SPARCworks,
Sun Forte, Irix cc, HP-UX aCC, IBM AIX xlC_r, Intel
ecc and Compaq cxx.
To compile only the client use ./configure --without-server.
There is currently no support for only compiling the server, nor is it likely to be added unless someone has a good reason for it.
If you want/need to change any `Makefile' or the configure script you will also need GNU Automake and Autoconf. See section 2.3.4 Installing from the Development Source Tree.
All steps needed to remake everything from the most basic files.
/bin/rm */.deps/*.P
/bin/rm -f config.cache
aclocal
autoheader
aclocal
automake
autoconf
./configure --with-debug=full --prefix='your installation directory'

# The makefiles generated above need GNU make 3.75 or newer.
# (called gmake below)
gmake clean all install init-db
If you run into problems with a new port, you may have to do some debugging of MySQL! See section E.1 Debugging a MySQL server.
Note: before you start debugging mysqld, first get the test
programs mysys/thr_alarm and mysys/thr_lock to work. This
will ensure that your thread installation has even a remote chance to work!
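For example, assuming the test programs have been built in the `mysys' directory of the source tree (the exact build step may differ between versions), a quick check could look like this:
shell> cd mysys
shell> ./thr_alarm
shell> ./thr_lock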
If you are using some functionality that is very new in MySQL,
you can try to run mysqld with the --skip-new option (which will disable all
new, potentially unsafe functionality) or with --safe-mode, which
disables a lot of optimisations that may cause problems.
See section A.4.1 What To Do If MySQL Keeps Crashing.
If mysqld doesn't want to start, you should check that you don't have
any `my.cnf' files that interfere with your setup!
You can check your `my.cnf' arguments with mysqld --print-defaults
and avoid using them by starting with mysqld --no-defaults ....
If mysqld starts to eat up CPU or memory or if it ``hangs'', you
can use mysqladmin processlist status to find out if someone is
executing a query that takes a long time. It may be a good idea to
run mysqladmin -i10 processlist status in some window if you are
experiencing performance problems or problems when new clients can't connect.
The command mysqladmin debug will dump some information about
locks in use, used memory and query usage to the mysql log file. This
may help solve some problems. This command also provides some useful
information even if you haven't compiled MySQL for debugging!
If the problem is that some tables are getting slower and slower you
should try to optimise the table with OPTIMIZE TABLE or
myisamchk. See section 4 Database Administration. You should also
check the slow queries with EXPLAIN.
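For example (the table name and query below are only illustrations):
mysql> OPTIMIZE TABLE log_table;
mysql> EXPLAIN SELECT * FROM log_table WHERE user_id=42;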
You should also read the OS-specific section in this manual for problems that may be unique to your environment. See section 2.6 Operating System Specific Notes.
If you have some very specific problem, you can always try to debug
MySQL. To do this you must configure MySQL with the
--with-debug or the --with-debug=full option. You can check
whether MySQL was compiled with debugging by doing:
mysqld --help. If the --debug flag is listed with the
options then you have debugging enabled. mysqladmin ver also
lists the mysqld version as mysql ... --debug in this case.
If you are using gcc or egcs, the recommended configure line is:
CC=gcc CFLAGS="-O2" CXX=gcc CXXFLAGS="-O2 -felide-constructors \
-fno-exceptions -fno-rtti" ./configure --prefix=/usr/local/mysql \
--with-debug --with-extra-charsets=complex
This will avoid problems with the libstdc++ library and with C++
exceptions (many compilers have problems with C++ exceptions in threaded
code) and compile a MySQL version with support for all character sets.
If you suspect a memory overrun error, you can configure MySQL
with --with-debug=full, which will install a memory allocation
(SAFEMALLOC) checker. Running with SAFEMALLOC is however
quite slow, so if you get performance problems you should start
mysqld with the --skip-safemalloc option. This will
disable the memory overrun checks for each call to malloc and
free.
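A minimal sketch of the two steps just described:
shell> ./configure --with-debug=full
shell> mysqld --skip-safemalloc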
If mysqld stops crashing when you compile it with
--with-debug, you have probably found a compiler bug or a timing
bug within MySQL. In this case you can try to add -g to
the CFLAGS and CXXFLAGS variables above and not use
--with-debug. If mysqld now dies, you can at least attach
to it with gdb or use gdb on the core file to find out
what happened.
When you configure MySQL for debugging you automatically enable a
lot of extra safety check functions that monitor the health of mysqld.
If they find something ``unexpected,'' an entry will be written to
stderr, which safe_mysqld directs to the error log! This also
means that if you are having some unexpected problems with MySQL and
are using a source distribution, the first thing you should do is to
configure MySQL for debugging! (The second thing, of course, is to
send mail to mysql@lists.mysql.com and ask for help. Please use the
mysqlbug script for all bug reports or questions regarding the
MySQL version you are using!)
In the Windows MySQL distribution, mysqld.exe is by
default compiled with support for trace files.
If the mysqld server doesn't start or if you can cause the
mysqld server to crash quickly, you can try to create a trace
file to find the problem.
To do this you have to have a mysqld that is compiled for debugging.
You can check this by executing mysqld -V. If the version number
ends with -debug, it's compiled with support for trace files.
Start the mysqld server with a trace log in `/tmp/mysqld.trace'
(or `C:\mysqld.trace' on Windows):
mysqld --debug
On Windows you should also use the --standalone flag to not start
mysqld as a service:
In a DOS window do:
mysqld --debug --standalone
After this you can use the mysql.exe command-line tool in a
second DOS window to reproduce the problem. You can take down the above
mysqld server with mysqladmin shutdown.
Note that the trace file will get very big! If you want to have a smaller trace file, you can use something like:
mysqld --debug=d,info,error,query,general,where:O,/tmp/mysqld.trace
which only prints information with the most interesting tags in `/tmp/mysqld.trace'.
If you make a bug report about this, please only send the lines from the trace file to the appropriate mailing list where something seems to go wrong! If you can't locate the wrong place, you can ftp the trace file, together with a full bug report, to ftp://support.mysql.com/pub/mysql/secret/ so that a MySQL developer can take a look at it.
The trace file is made with the DBUG package by Fred Fish. See section E.3 The DBUG Package.
On most systems you can also start mysqld from gdb to get
more information if mysqld crashes.
With some older gdb versions on Linux you must use run
--one-thread if you want to be able to debug mysqld threads. In
this case you can only have one thread active at a time. We recommend that you
upgrade to gdb 5.1 as soon as possible, as thread debugging works much better with that
version!
When running mysqld under gdb, you should disable the stack trace
with --skip-stack-trace to be able to catch segfaults within gdb.
It's very hard to debug MySQL under gdb if you do a lot of
new connections the whole time as gdb doesn't free the memory for
old threads. You can avoid this problem by starting mysqld with
-O thread_cache_size='max_connections+1'. In most cases just
using -O thread_cache_size=5 will help a lot!
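For example, a debugging-friendly invocation that combines the options mentioned above might look like:
shell> mysqld --skip-stack-trace -O thread_cache_size=5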
If you want to get a core dump on Linux if mysqld dies with a
SIGSEGV signal, you can start mysqld with the --core-file option.
This core file can be used to make a backtrace that may help you
find out why mysqld died:
shell> gdb mysqld core
gdb> backtrace full
gdb> exit
See section A.4.1 What To Do If MySQL Keeps Crashing.
If you are using gdb 4.17.x or above on Linux, you should install a `.gdb' file, with the following information, in your current directory:
set print sevenbit off
handle SIGUSR1 nostop noprint
handle SIGUSR2 nostop noprint
handle SIGWAITING nostop noprint
handle SIGLWP nostop noprint
handle SIGPIPE nostop
handle SIGALRM nostop
handle SIGHUP nostop
handle SIGTERM nostop noprint
If you have problems debugging threads with gdb, you should download gdb 5.x and try this instead. The new gdb version has much improved thread handling!
Here is an example of how to debug mysqld:
shell> gdb /usr/local/libexec/mysqld
gdb> run
...
backtrace full # Do this when mysqld crashes
Include the above output in a mail generated with mysqlbug and
mail this to mysql@lists.mysql.com.
If mysqld hangs you can try to use some system tools like
strace or /usr/proc/bin/pstack to examine where
mysqld has hung.
strace -o /tmp/log libexec/mysqld
If you are using the Perl DBI interface, you can turn on
debugging information by using the trace method or by
setting the DBI_TRACE environment variable.
See section 8.5.2 The DBI Interface.
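For example, to get a fairly verbose DBI trace before starting your Perl script (trace level 2 is just one reasonable choice):
shell> DBI_TRACE=2
shell> export DBI_TRACE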
On some operating systems, the error log will contain a stack trace if
mysqld dies unexpectedly. You can use this to find out where (and
maybe why) mysqld died. See section 4.9.1 The Error Log. To get a stack trace,
you must not compile mysqld with the -fomit-frame-pointer
option to gcc. See section E.1.1 Compiling MYSQL for Debugging.
If the error file contains something like the following:
mysqld got signal 11;
The manual section 'Debugging a MySQL server' tells you how to use a
stack trace and/or the core file to produce a readable backtrace that may
help in finding out why mysqld died
Attemping backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong
stack range sanity check, ok, backtrace follows
0x40077552
0x81281a0
0x8128f47
0x8127be0
0x8127995
0x8104947
0x80ff28f
0x810131b
0x80ee4bc
0x80c3c91
0x80c6b43
0x80c1fd9
0x80c1686
you can find where mysqld died by doing the following:
mysqld server:
nm -n libexec/mysqld > /tmp/mysqld.sym
Note that most MySQL binary distributions (except for the "debug"
packages, where this information is included inside of the binaries
themselves) already ship with the above file, named mysqld.sym.gz.
In this case you can simply unpack it by doing:
gunzip < bin/mysqld.sym.gz > /tmp/mysqld.sym
resolve_stack_dump -s /tmp/mysqld.sym -n mysqld.stack.
This will print out where mysqld died. If this doesn't help you
find out why mysqld died, you should make a bug report and include
the output from the above command with the bug report.
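Putting the steps above together, a session might look like this (`mysqld.stack' is assumed to be a text file into which you have copied the numbers from the error log):
shell> nm -n libexec/mysqld > /tmp/mysqld.sym
shell> resolve_stack_dump -s /tmp/mysqld.sym -n mysqld.stack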
Note however that in most cases it will not help us to just have a stack
trace to find the reason for the problem. To be able to locate the bug
or provide a workaround, we would in most cases need to know the query
that killed mysqld and preferably a test case so that we can
repeat the problem! See section 1.7.1.3 How to Report Bugs or Problems.
Note that before starting mysqld with --log you should
check all your tables with myisamchk.
See section 4 Database Administration.
If mysqld dies or hangs, you should start mysqld with
--log. When mysqld dies again, you can examine the end of
the log file for the query that killed mysqld.
If you are using --log without a file name, the log is stored in
the database directory as 'hostname'.log. In most cases it's the last
query in the log file that killed mysqld, but if possible you
should verify this by restarting mysqld and executing the found
query from the mysql command-line tools. If this works, you
should also test all complicated queries that didn't complete.
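A rough sketch of this procedure (the data directory and host name are only examples, and you would normally start the server through safe_mysqld):
shell> safe_mysqld --log &
shell> tail /usr/local/mysql/data/myhost.log   # after mysqld has died
shell> mysql                                   # re-run the suspect query by hand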
You can also try the command EXPLAIN on all SELECT
statements that take a long time to ensure that mysqld is using
indexes properly. See section 5.2.1 EXPLAIN Syntax (Get Information About a SELECT).
You can find the queries that take a long time to execute by starting
mysqld with --log-slow-queries. See section 4.9.5 The Slow Query Log.
If you find the text mysqld restarted in the error log file
(normally named `hostname.err') you have probably found a query
that causes mysqld to fail. If this happens you should check all
your tables with myisamchk (see section 4 Database Administration),
and test the queries in the MySQL log files to see if one doesn't
work. If you find such a query, try first upgrading to the newest
MySQL version. If this doesn't help and you can't find anything
in the mysql mail archive, you should report the bug to
mysql@lists.mysql.com. Links to mail archives are available
online at http://lists.mysql.com/.
If you have started mysqld with --myisam-recover,
MySQL will automatically check and try to repair MyISAM
tables if they are marked as 'not closed properly' or 'crashed'. If
this happens, MySQL will write an entry in the
hostname.err file 'Warning: Checking table ...' which is
followed by Warning: Repairing table if the table needs to be
repaired. If you get a lot of these errors, without mysqld having
died unexpectedly just before, then something is wrong and needs to
be investigated further. See section 4.1.1 mysqld Command-line Options.
It's of course not a good sign if mysqld did die unexpectedly,
but in this case one shouldn't investigate the Checking table...
messages but instead try to find out why mysqld died.
If you get corrupted tables or if mysqld always fails after some
update commands, you can test if this bug is reproducible by doing the
following:
mysqladmin shutdown).
myisamchk -s database/*.MYI. Repair any
wrong tables with myisamchk -r database/table.MYI.
mysqld with --log-bin. See section 4.9.4 The Binary Log.
If you want to find a query that crashes mysqld, you should use
--log --log-bin.
mysqld server.
mysqld server without --log-bin
mysqlbinlog update-log-file | mysql.
The update log is saved in the MySQL database directory with
the name hostname-bin.#.
mysqld to die with the
above command, you have found a reproducible bug that should be easy to
fix! FTP the tables and the binary log to
ftp://support.mysql.com/pub/mysql/secret/ and enter it into
our bugs system at http://bugs.mysql.com/.
If you are a support customer, you can also mail support@mysql.com to
alert the MySQL team about the problem and have it fixed as soon as possible.
You can also use the script mysql_find_rows to just execute some of the
update statements if you want to narrow down the problem.
To be able to debug a MySQL client with the integrated debug package,
you should configure MySQL with --with-debug or
--with-debug=full. See section 2.3.3 Typical configure Options.
Before running a client, you should set the MYSQL_DEBUG environment
variable:
shell> MYSQL_DEBUG=d:t:O,/tmp/client.trace
shell> export MYSQL_DEBUG
This causes clients to generate a trace file in `/tmp/client.trace'.
If you have problems with your own client code, you should attempt to
connect to the server and run your query using a client that is known to
work. Do this by running mysql in debugging mode (assuming you
have compiled MySQL with debugging on):
shell> mysql --debug=d:t:O,/tmp/client.trace
This will provide useful information in case you mail a bug report. See section 1.7.1.3 How to Report Bugs or Problems.
If your client crashes at some 'legal'-looking code, you should check that your `mysql.h' include file matches your mysql library file. A very common mistake is to use an old `mysql.h' file from an old MySQL installation with a new MySQL library.
The MySQL server and most MySQL clients are compiled with the DBUG package originally made by Fred Fish. When one has configured MySQL for debugging, this package makes it possible to get a trace file of what the program is doing. See section E.1.2 Creating Trace Files.
One uses the debug package by invoking the program with the
--debug="..." or the -#... option.
Most MySQL programs have a default debug string that will be
used if you don't specify an option to --debug. The default
trace file is usually /tmp/programname.trace on Unix and
\programname.trace on Windows.
The debug control string is a sequence of colon separated fields as follows:
<field_1>:<field_2>:...:<field_N>
Each field consists of a mandatory flag character followed by an optional "," and comma-separated list of modifiers:
flag[,modifier,modifier,...,modifier]
The currently recognised flag characters are:
| Flag | Description |
| d | Enable output from DBUG_<N> macros for the current state. May be followed by a list of keywords which selects output only for the DBUG macros with that keyword. An empty list of keywords implies output for all macros. |
| D | Delay after each debugger output line. The argument is the number of tenths of seconds to delay, subject to machine capabilities. That is, -#D,20 delays for two seconds. |
| f | Limit debugging and/or tracing, and profiling to the list of named functions. Note that a null list will disable all functions. The appropriate "d" or "t" flags must still be given, this flag only limits their actions if they are enabled. |
| F | Identify the source file name for each line of debug or trace output. |
| i | Identify the process with the pid or thread id for each line of debug or trace output. |
| g | Enable profiling. Create a file called 'dbugmon.out' containing information that can be used to profile the program. May be followed by a list of keywords that select profiling only for the functions in that list. A null list implies that all functions are considered. |
| L | Identify the source file line number for each line of debug or trace output. |
| n | Print the current function nesting depth for each line of debug or trace output. |
| N | Number each line of dbug output. |
| o | Redirect the debugger output stream to the specified file. The default output is stderr. |
| O | As o but the file is really flushed between each write. When needed the file is closed and reopened between each write. |
| p | Limit debugger actions to specified processes. A process must be identified with the DBUG_PROCESS macro and match one in the list for debugger actions to occur. |
| P | Print the current process name for each line of debug or trace output. |
| r | When pushing a new state, do not inherit the previous state's function nesting level. Useful when the output is to start at the left margin. |
| S | Do function _sanity(_file_,_line_) at each debugged function until _sanity() returns something that differs from 0. (Mostly used with safemalloc to find memory leaks) |
| t | Enable function call/exit trace lines. May be followed by a list (containing only one modifier) giving a numeric maximum trace level, beyond which no output will occur for either debugging or tracing macros. The default is a compile time option. |
Some examples of debug control strings which might appear on a shell command-line (the "-#" is typically used to introduce a control string to an application program) are:
-#d:t
-#d:f,main,subr1:F:L:t,20
-#d,input,output,files:n
-#d:t:i:O,\\mysqld.trace
In MySQL, common tags to print (with the d option) are:
enter,exit,error,warning,info and
loop.
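For example, to trace only those tags into a file of your choice, one could start the server with something like:
shell> mysqld --debug=d,enter,exit,error,warning,info,loop:O,/tmp/mysqld.trace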
Currently MySQL only supports table locking for
ISAM/MyISAM and HEAP tables,
page-level locking for BDB tables and
row-level locking for InnoDB tables.
See section 5.3.1 How MySQL Locks Tables.
With MyISAM tables one can freely mix INSERT and
SELECT without locks, if the INSERTs are non-conflicting
(i.e. whenever they append to the end of the table file rather than
filling freespace from deleted rows/data).
Starting in version 3.23.33, you can analyse the table lock contention
on your system by checking Table_locks_waited and
Table_locks_immediate status variables.
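For example (if your server version does not accept a LIKE pattern here, plain SHOW STATUS lists the same counters):
mysql> SHOW STATUS LIKE 'Table_locks%';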
To decide if you want to use a table type with row-level locking, you will want to look at what the application does and what the select/update pattern of the data is.
Pros for row locking:
Cons:
GROUP
BY on a large part of the data or if one often has to scan the whole table.
Table locks are superior to page level / row level locks in the following cases:
UPDATE table_name SET column=value WHERE unique_key=#
DELETE FROM table_name WHERE unique_key=#
SELECT combined with INSERT (and very few UPDATEs
and DELETEs).
GROUP BY on the whole table without any writers.
Other options than row / page level locking:
Versioning (like we use in MySQL for concurrent inserts) where you can have one writer at the same time as many readers. This means that the database/table supports different views for the data depending on when one started to access it. Other names for this are time travel, copy on write or copy on demand.
Copy on demand is in many cases much better than page or row level locking; the worst case does, however, use much more memory than when using normal locks.
Instead of using row level locks one can use application level locks (like get_lock/release_lock in MySQL). This works of course only in well-behaved applications.
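For example, two cooperating clients could serialise access to a shared resource like this (the lock name and timeout are arbitrary):
mysql> SELECT GET_LOCK("my_app_lock", 10);
mysql> SELECT RELEASE_LOCK("my_app_lock");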
In many cases one can make an educated guess about which locking type is best for the application, but generally it's very hard to say that a given lock type is better than another; everything depends on the application, and different parts of the application may require different lock types.
Here are some tips about locking in MySQL:
Most web applications do lots of selects, very few deletes, updates mainly on keys, and inserts in some specific tables. The base MySQL setup is very well tuned for this.
Concurrent users are not a problem if one doesn't mix updates with selects that need to examine many rows in the same table.
If one mixes inserts and deletes on the same table then INSERT DELAYED
may be of great help.
One can also use LOCK TABLES to speed up things (many updates within
a single lock is much faster than updates without locks). Splitting
things into different tables will also help.
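For example (the `hits' table and its columns are purely illustrative):
mysql> INSERT DELAYED INTO hits (page_id,counter) VALUES (3,1);
mysql> LOCK TABLES hits WRITE;
mysql> UPDATE hits SET counter=counter+1 WHERE page_id=1;
mysql> UPDATE hits SET counter=counter+1 WHERE page_id=2;
mysql> UNLOCK TABLES;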
If you get speed problems with the table locks in MySQL, you
may be able to solve these by converting some of your tables to InnoDB
or BDB tables.
See section 7.5 InnoDB Tables. See section 7.6 BDB or BerkeleyDB Tables.
The optimisation section in the manual covers a lot of different aspects of how to tune applications. See section 5.2.12 Other Optimisation Tips.
I have tried to use the RTS thread packages with MySQL but stumbled on the following problems:
They use an old version of a lot of POSIX calls and it is very tedious to make wrappers for all functions. I am inclined to think that it would be easier to change the thread libraries to the newest POSIX specification.
Some wrappers are already written. See `mysys/my_pthread.c' for more info.
At least the following should be changed:
pthread_get_specific should use one argument.
sigwait should take two arguments.
A lot of functions (at least pthread_cond_wait,
pthread_cond_timedwait)
should return the error code on error. Now they return -1 and set errno.
Another problem is that user-level threads use the ALRM signal and this
aborts a lot of functions (read, write, open...).
MySQL should do a retry on interrupt on all of these but it is
not that easy to verify it.
The biggest unsolved problem is the following:
To get thread-level alarms I changed `mysys/thr_alarm.c' to wait between
alarms with pthread_cond_timedwait(), but this aborts with error
EINTR. I tried to debug the thread library as to why this happens,
but couldn't find any easy solution.
If someone wants to try MySQL with RTS threads I suggest the following:
-DHAVE_rts_threads.
thr_alarm.
thr_alarm. If it runs without any ``warning'', ``error'' or aborted
messages, you are on the right track. Here is a successful run on
Solaris:
Main thread: 1
Thread 0 (5) started
Thread: 5  Waiting
process_alarm
Thread 1 (6) started
Thread: 6  Waiting
process_alarm
process_alarm
thread_alarm
Thread: 6  Slept for 1 (1) sec
Thread: 6  Waiting
process_alarm
process_alarm
thread_alarm
Thread: 6  Slept for 2 (2) sec
Thread: 6  Simulation of no alarm needed
Thread: 6  Slept for 0 (3) sec
Thread: 6  Waiting
process_alarm
process_alarm
thread_alarm
Thread: 6  Slept for 4 (4) sec
Thread: 6  Waiting
process_alarm
thread_alarm
Thread: 5  Slept for 10 (10) sec
Thread: 5  Waiting
process_alarm
process_alarm
thread_alarm
Thread: 6  Slept for 5 (5) sec
Thread: 6  Waiting
process_alarm
process_alarm
...
thread_alarm
Thread: 5  Slept for 0 (1) sec
end
MySQL is very dependent on the thread package used. So when choosing a good platform for MySQL, the thread package is very important.
There are at least three types of thread packages:
ps may show the different threads. If one thread aborts, the
whole process aborts. Most system calls are thread-safe and should
require very little overhead. Solaris, HP-UX, AIX and OSF/1 have kernel
threads.
In some systems kernel threads are managed by integrating user level threads in the system libraries. In such cases, the thread switching can only be done by the thread library and the kernel isn't really ``thread aware''.
Here is a list of all the environment variables that are used directly or indirectly by MySQL. Most of these can also be found in other places in this manual.
Note that any options on the command-line will take precedence over values specified in configuration files and environment variables, and values in configuration files take precedence over values in environment variables.
In many cases it's preferable to use a configure file instead of environment variables to modify the behaviour of MySQL. See section 4.1.2 `my.cnf' Option Files.
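For example, if the same setting is given in several places, the command-line value wins (the port numbers here are arbitrary):
shell> MYSQL_TCP_PORT=3307
shell> export MYSQL_TCP_PORT
shell> mysql --port=3308        # connects to port 3308, not 3307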
| Variable | Description |
| CXX | Set this to your C++ compiler when running configure. |
| CC | Set this to your C compiler when running configure. |
| CFLAGS | Flags for your C compiler when running configure. |
| CXXFLAGS | Flags for your C++ compiler when running configure. |
| DBI_USER | The default user name for Perl DBI. |
| DBI_TRACE | Used when tracing Perl DBI. |
| HOME | The default path for the mysql history file is `$HOME/.mysql_history'. |
| LD_RUN_PATH | Used to specify where your `libmysqlclient.so' is. |
| MYSQL_DEBUG | Debug-trace options when debugging. |
| MYSQL_HISTFILE | The path to the mysql history file. |
| MYSQL_HOST | Default host name used by the mysql command-line client. |
| MYSQL_PS1 | Command prompt to use in the mysql command-line client. See section 4.8.2 mysql, The Command-line Tool. |
| MYSQL_PWD | The default password when connecting to mysqld. Note that use of this is insecure! |
| MYSQL_TCP_PORT | The default TCP/IP port. |
| MYSQL_UNIX_PORT | The default socket; used for connections to localhost. |
| PATH | Used by the shell to find the MySQL programs. |
| TMPDIR | The directory where temporary tables/files are created. |
| TZ | This should be set to your local time zone. See section A.4.6 Time Zone Problems. |
| UMASK_DIR | The user-directory creation mask when creating directories. Note that this is ANDed with UMASK! |
| UMASK | The user-file creation mask when creating files. |
| USER | The default user on Windows to use when connecting to mysqld. |
A regular expression (regex) is a powerful way of specifying a complex search.
MySQL uses Henry Spencer's implementation of regular expressions, which is aimed at conformance with POSIX 1003.2. MySQL uses the extended version.
This is a simplistic reference that skips the details. To get more exact
information, see Henry Spencer's regex(7) manual page that is
included in the source distribution. See section C Credits.
A regular expression describes a set of strings. The simplest regexp is
one that has no special characters in it. For example, the regexp
hello matches hello and nothing else.
Non-trivial regular expressions use certain special constructs so that
they can match more than one string. For example, the regexp
hello|word matches either the string hello or the string
word.
As a more complex example, the regexp B[an]*s matches any of the
strings Bananas, Baaaaas, Bs, and any other string
starting with a B, ending with an s, and containing any
number of a or n characters in between.
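For example, tested with the REGEXP operator against the pattern above:
mysql> SELECT "Bananas" REGEXP "B[an]*s";      -> 1
mysql> SELECT "Bxs" REGEXP "B[an]*s";          -> 0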
A regular expression may use any of the following special characters/constructs:
^
mysql> SELECT "fo\nfo" REGEXP "^fo$"; -> 0 mysql> SELECT "fofo" REGEXP "^fo"; -> 1
$
mysql> SELECT "fo\no" REGEXP "^fo\no$"; -> 1 mysql> SELECT "fo\no" REGEXP "^fo$"; -> 0
.
mysql> SELECT "fofo" REGEXP "^f.*"; -> 1 mysql> SELECT "fo\nfo" REGEXP "^f.*"; -> 1
a*
a characters.
mysql> SELECT "Ban" REGEXP "^Ba*n"; -> 1 mysql> SELECT "Baaan" REGEXP "^Ba*n"; -> 1 mysql> SELECT "Bn" REGEXP "^Ba*n"; -> 1
a+
Match any sequence of one or more a characters.
mysql> SELECT "Ban" REGEXP "^Ba+n"; -> 1
mysql> SELECT "Bn" REGEXP "^Ba+n";  -> 0
a?
Match either zero or one a character.
mysql> SELECT "Bn" REGEXP "^Ba?n";   -> 1
mysql> SELECT "Ban" REGEXP "^Ba?n";  -> 1
mysql> SELECT "Baan" REGEXP "^Ba?n"; -> 0
de|abc
Match either of the sequences de or abc.
mysql> SELECT "pi" REGEXP "pi|apa";      -> 1
mysql> SELECT "axe" REGEXP "pi|apa";     -> 0
mysql> SELECT "apa" REGEXP "pi|apa";     -> 1
mysql> SELECT "apa" REGEXP "^(pi|apa)$"; -> 1
mysql> SELECT "pi" REGEXP "^(pi|apa)$";  -> 1
mysql> SELECT "pix" REGEXP "^(pi|apa)$"; -> 0
(abc)*
Match zero or more instances of the sequence abc.
mysql> SELECT "pi" REGEXP "^(pi)*$";   -> 1
mysql> SELECT "pip" REGEXP "^(pi)*$";  -> 0
mysql> SELECT "pipi" REGEXP "^(pi)*$"; -> 1
{1}
{2,3}
The {n} and {m,n} notations provide a more general way of writing regular expressions that match many occurrences of the previous atom (or ``piece'') of the pattern.
a*
Can be written as a{0,}.
a+
Can be written as a{1,}.
a?
Can be written as a{0,1}.
An atom followed by a bound containing one integer i and no comma matches a sequence of exactly i matches of the atom. An atom followed by a bound containing one integer i and a comma matches a sequence of i or more matches of the atom. An atom followed by a bound containing two integers i and j matches a sequence of i through j (inclusive) matches of the atom.
Both arguments must be in the range from 0 to RE_DUP_MAX (default 255), inclusive. If there are two arguments, the second must be greater than or equal to the first.
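For example, a bound can require a specific number of repetitions of the a in the earlier Ba*n pattern:
mysql> SELECT "Baaan" REGEXP "^Ba{2,3}n$"; -> 1
mysql> SELECT "Ban" REGEXP "^Ba{2,3}n$";   -> 0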
[a-dX]
[^a-dX]
Matches any character that is (or is not, if ^ is used) either a, b, c, d, or X. A - character between two other characters forms a range matching all characters from the first to the second, so [0-9] matches any decimal digit. To include a literal ] character, it must immediately follow the opening bracket [. To include a literal - character, it must be written first or last. Any character that does not have a defined meaning inside a [] pair has no special meaning and matches only itself.
mysql> SELECT "aXbc" REGEXP "[a-dXYZ]";       -> 1
mysql> SELECT "aXbc" REGEXP "^[a-dXYZ]$";     -> 0
mysql> SELECT "aXbc" REGEXP "^[a-dXYZ]+$";    -> 1
mysql> SELECT "aXbc" REGEXP "^[^a-dXYZ]+$";   -> 0
mysql> SELECT "gheis" REGEXP "^[^a-dXYZ]+$";  -> 1
mysql> SELECT "gheisa" REGEXP "^[^a-dXYZ]+$"; -> 0
[[.characters.]]
Within a bracket expression (written using [ and ]), a collating element enclosed in [. and .] stands for the sequence of characters of that collating element. For example, if the collating sequence includes a ch collating element, then the regular expression [[.ch.]]*c matches the first five characters of chchcc.
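Because a single character is itself a collating element, a pattern like the following should be equivalent to writing the plain character inside the brackets (shown purely as an illustration):
mysql> SELECT "abc" REGEXP "a[[.b.]]c"; -> 1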
[=character_class=]
Within a bracket expression, an equivalence class name enclosed in [= and =] stands for the list of all characters equivalent to that one, including itself. For example, if o and (+) are the members of an equivalence class, then [[=o=]], [[=(+)=]], and [o(+)] are all synonymous. An equivalence class may not be an endpoint of a range.
[:character_class:]
Within a bracket expression, the name of a character class enclosed in [: and :] stands for the list of all characters belonging to that class. Standard character class names are:
| Name | Name | Name |
| alnum | digit | punct |
| alpha | graph | space |
| blank | lower | upper |
| cntrl | print | xdigit |
These stand for the character classes defined in the ctype(3) manual page. A locale may provide others. A character class may not be used as an endpoint of a range.
mysql> SELECT "justalnums" REGEXP "[[:alnum:]]+"; -> 1
mysql> SELECT "!!" REGEXP "[[:alnum:]]+";         -> 0
[[:<:]]
[[:>:]]
These match the null string at the beginning and at the end of a word, respectively. A word is a sequence of word characters that is not preceded or followed by word characters. A word character is an alnum character (as defined by ctype(3)) or an underscore (_).
mysql> SELECT "a word a" REGEXP "[[:<:]]word[[:>:]]";  -> 1
mysql> SELECT "a xword a" REGEXP "[[:<:]]word[[:>:]]"; -> 0
mysql> SELECT "weeknights" REGEXP "^(wee|week)(knights|nights)$"; -> 1
Version 2, June 1991
Copyright © 1989, 1991 Free Software Foundation, Inc. 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.
10.6 NO WARRANTY
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the ``copyright'' line and a pointer to where the full notice is found.
one line to give the program's name and a brief idea of what it does. Copyright (C) yyyy name of author This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) 19yy name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a ``copyright disclaimer'' for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. signature of Ty Coon, 1 April 1989 Ty Coon, President of Vice
This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.
Version 2.1, February 1999
Copyright © 1991, 1999 Free Software Foundation, Inc. 59 Temple Place -- Suite 330, Boston, MA 02111-1307, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. [This is the first released version of the Lesser GPL. It also counts as the successor of the GNU Library Public License, version 2, hence the version number 2.1.]
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some specially designated software--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below.
When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things.
To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.
Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.
When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.
We call this license the Lesser General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.
For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.
Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a ``work based on the library'' and a ``work that uses the library''. The former contains code derived from the library, whereas the latter must be combined with the library in order to run.
10.11 NO WARRANTY
If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License).
To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the ``copyright'' line and a pointer to where the full notice is found.
one line to give the library's name and an idea of what it does. Copyright (C) year name of author This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
Also add information on how to contact you by electronic and paper mail.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a ``copyright disclaimer'' for the library, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. signature of Ty Coon, 1 April 1990 Ty Coon, President of Vice
That's all there is to it!
CC environment variable
CXX environment variable
DBI_TRACE environment variable
CC
CXX
DBI_TRACE
HOME
MYSQL_DEBUG
MYSQL_HISTFILE
MYSQL_HOST
MYSQL_PWD environment variable
MYSQL_TCP_PORT environment variable
MYSQL_UNIX_PORT environment variable
USER
HOME environment variable
my_init()
mysql_affected_rows()
mysql_autocommit()
mysql_bind_param()
mysql_bind_result()
mysql_change_user()
mysql_character_set_name()
mysql_close()
mysql_commit()
mysql_connect()
mysql_create_db()
mysql_data_seek()
MYSQL_DEBUG environment variable
mysql_debug()
mysql_drop_db()
mysql_dump_debug_info()
mysql_eof()
mysql_errno()
mysql_error()
mysql_escape_string()
mysql_execute()
mysql_fetch()
mysql_fetch_field()
mysql_fetch_field_direct()
mysql_fetch_fields()
mysql_fetch_lengths()
mysql_fetch_row()
mysql_field_count()
mysql_field_seek()
mysql_field_tell()
mysql_free_result()
mysql_get_client_info()
mysql_get_host_info()
mysql_get_proto_info()
mysql_get_server_info()
mysql_get_server_version()
MYSQL_HISTFILE environment variable
MYSQL_HOST environment variable
mysql_info()
mysql_init()
mysql_insert_id()
mysql_kill()
mysql_list_dbs()
mysql_list_fields()
mysql_list_processes()
mysql_list_tables()
mysql_more_results()
mysql_next_result()
mysql_num_fields()
mysql_num_rows()
mysql_options()
mysql_param_count()
mysql_ping()
mysql_prepare()
mysql_prepare_result()
MYSQL_PWD environment variable
mysql_query()
mysql_real_connect()
mysql_real_escape_string()
mysql_real_query()
mysql_reload()
mysql_rollback()
mysql_row_seek()
mysql_row_tell()
mysql_select_db()
mysql_send_long_data()
mysql_server_end()
mysql_server_init()
mysql_shutdown()
mysql_sqlstate()
mysql_stat()
mysql_stmt_affected_rows()
mysql_stmt_close()
mysql_stmt_data_seek()
mysql_stmt_errno()
mysql_stmt_error()
mysql_stmt_num_rows()
mysql_stmt_row_seek()
mysql_stmt_row_tell()
mysql_stmt_sqlstate()
mysql_stmt_store_result()
mysql_store_result()
MYSQL_TCP_PORT environment variable
mysql_thread_end()
mysql_thread_id()
mysql_thread_init()
mysql_thread_safe()
MYSQL_UNIX_PORT environment variable
mysql_use_result()
USER environment variable
ACID
GROUP BY clauses
ORDER BY clauses
AUTO_INCREMENT, and NULL values
batch, mysql option
BDB table type
BDB tables
BerkeleyDB table type
BLOB columns, default values
BLOB columns, indexing
BLOB, inserting binary data
BLOB, size
mysqld server
gcc
cc1plus problems
character-sets-dir, mysql option
mysql
gcc
compress, mysql option
config.cache file
configure script
configure, running after prior invocation
connect_timeout variable
database, mysql option
db table, sorting
DBI interface
DBI Perl module
DBI/DBD
debug-info, mysql option
debug, mysql option
BLOB and TEXT columns
default-character-set, mysql option
mysql.sock
SHOW
enable-named-commands, mysql option
myisamchk output
execute, mysql option
fatal signal 11
config.cache
tmp
force, mysql option
SELECT and WHERE clauses
gcc
GROUP BY, aliases in
GROUP BY, extensions to standard SQL
HEAP table type
help, mysql option
host table
host table, sorting
host, mysql option
html, mysql option
ignore-space, mysql option
BLOB columns
IS NULL
LIKE
NULL values
TEXT columns
InnoDB table type
InnoDB tables
ISAM table type
mysqlclient
make_binary_distribution
max_allowed_packet
max_join_size
MERGE table type
msql2mysql
MyISAM table type
myisamchk
myisamchk, example output
myisamchk, options
myisampack
mysql
mysql command-line options
mysql.sock, protection
mysql_fix_privilege_tables
mysql_install_db
mysql_install_db script
mysqlaccess
mysqladmin
mysqlbinlog
mysqlbug
mysqlbug script
mysqlbug script, location
mysqlclient library
mysqld
mysqld options
mysqld server, buffer sizes
mysqld, starting
mysqld-max
mysqld_multi
mysqld_safe
mysqldump
mysqlimport
mysqlshow
net_buffer_length
mysql.user table
no-auto-rehash, mysql option
no-beep, mysql option
no-named-commands, mysql option
no-pager, mysql option
no-tee, mysql option
NULL values, and indexes
NULL values, vs. empty values
NULL, testing for null
NULL values, and AUTO_INCREMENT columns
NULL values, and TIMESTAMP columns
one-database, mysql option
Open Source, defined
mysql
myisamchk
ORDER BY, aliases in
pack_isam
pager, mysql option
password, mysql option
port, mysql option
DATE columns
prompt command
prompt, mysql option
protocol, mysql option
quick, mysql option
raw, mysql option
reconnect, mysql option
replace
configure after prior invocation
safe-mode command
safe-updates, mysql option
safe_mysqld
mysql_install_db
mysqlbug
SELECT, Query Cache
select_limit
set-variable, mysql option
silent, mysql option
skip-column-names, mysql option
skip-line-numbers, mysql option
socket, mysql option
sql_yacc.cc problems
mysqld
mysql
status command
SELECTs
table, mysql option
BDB
Berkeley DB
HEAP
host
tee, mysql option
TEXT columns, default values
TEXT columns, indexing
TEXT, size
connect_timeout variable
TIMESTAMP, and NULL values
unbuffered, mysql option
user table, sorting
user, mysql option
VARCHAR, size
mysqld
verbose, mysql option
version, mysql option
vertical, mysql option
wait, mysql option
LIKE
mysql.columns_priv table
mysql.db table
mysql.host table
mysql.tables_priv table
mysql.user table
xml, mysql option
This document was generated on 6 June 2003 using the texi2html translator version 1.52 (extended by davida@detron.se).