MySQL Tutorials - Herong's Tutorial Examples - v4.43, by Herong Yang
InnoDB Table with Long Rows
This section provides a tutorial example showing index pages and data pages for an InnoDB table with long rows of about 1812 bytes each.
In previous tutorials, the "Head" table rows are relatively short, only 216 bytes. In this tutorial, I want to repeat some tests with longer rows.
1. Create an InnoDB table with long rows, targeting 8 rows per page.
mysql> CREATE TABLE Body (id INT, fid INT,
    ->   b1 CHAR(255), b2 CHAR(255), b3 CHAR(255), b4 CHAR(255),
    ->   b5 CHAR(255), b6 CHAR(255), b7 CHAR(255),
    ->   PRIMARY KEY (id)) ENGINE=InnoDB DEFAULT CHARSET=latin1;
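Given these column definitions, the 1812-byte row size quoted above can be roughly reconstructed (a back-of-the-envelope sketch in Python; the ~19-byte per-row overhead is an inference, roughly matching InnoDB's record header plus hidden transaction-ID and roll-pointer fields):

```python
# Raw column data for one "Body" row: 2 INT columns (4 bytes each)
# plus 7 fixed-width CHAR(255) columns in latin1 (1 byte per char).
raw = 2 * 4 + 7 * 255        # 1793 bytes of column data
overhead = 1812 - raw        # about 19 bytes of per-row overhead (inferred)
page_data = 8 * 1812         # 8 rows fill 14496 bytes of a 16 KB page
print(raw, overhead, page_data)   # 1793 19 14496
```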
2. Prepare two stored procedures to insert rows into the table in transaction chunks. Since the buffer pool holds about 8192 pages, a chunk size of 10000 rows (10000/8 = 1250 pages) should be easy for the buffer pool to handle.
mysql> DELIMITER '/';

mysql> CREATE PROCEDURE InsertBody(IN X INT, IN Y INT)
    -> BEGIN
    ->   DECLARE I INT;
    ->   SET I = X;
    ->   WHILE I <= Y DO
    ->     INSERT INTO Body VALUES
    ->       (I, I, 'B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7');
    ->     SET I = I + 1;
    ->   END WHILE;
    -> END/

mysql> CREATE PROCEDURE LargeBody()
    -> BEGIN
    ->   DECLARE S INT;
    ->   DECLARE I INT;
    ->   SET S = 10000;
    ->   SET I = 0;
    ->   WHILE I < 100 DO
    ->     SELECT 'Inserting', I*S+1, (I+1)*S;
    ->     CALL InsertBody(I*S+1, (I+1)*S);
    ->     commit;
    ->     SET I = I + 1;
    ->   END WHILE;
    -> END/

mysql> DELIMITER ';'/
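The chunk-size arithmetic in step 2 can be checked quickly (a sketch assuming the default 128 MB buffer pool with 16 KB pages, which is where the 8192-page figure comes from):

```python
# Buffer pool capacity in pages, assuming the default 128 MB pool
# and the default 16 KB InnoDB page size.
pool_pages = 128 * 1024 * 1024 // (16 * 1024)   # 8192 pages
# Pages dirtied by one 10,000-row chunk at 8 rows per data page.
chunk_pages = 10000 // 8                        # 1250 pages
print(pool_pages, chunk_pages)                  # 8192 1250
```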
3. Add 1,000,000 rows with stored procedures. Since each row is about 2 KB, the total data size is about 2 GB.
mysql> set sql_log_bin=OFF;
mysql> set autocommit=OFF;

mysql> CALL LargeBody();
| Inserting |      1 |   10000 |   -- 1 row in set (0.00 sec)
| Inserting |  10001 |   20000 |   -- 1 row in set (1.62 sec)
| Inserting |  20001 |   30000 |   -- 1 row in set (2.77 sec)
| Inserting |  30001 |   40000 |   -- 1 row in set (3.76 sec)
| Inserting |  40001 |   50000 |   -- 1 row in set (4.83 sec)
| Inserting |  50001 |   60000 |   -- 1 row in set (22.24 sec)
| Inserting |  60001 |   70000 |   -- 1 row in set (43.40 sec)
| Inserting |  70001 |   80000 |   -- 1 row in set (1 min 5.74 sec)
...
| Inserting | 980001 |  990000 |   -- 1 row in set (35 min 24.92 sec)
| Inserting | 990001 | 1000000 |   -- 1 row in set (35 min 47.70 sec)
The insert speed is pretty stable. The times shown are cumulative; once the buffer pool fills up, each 10,000-row chunk takes about 22 seconds.
4. Look at the buffered pages for the table.
mysql> select * from information_schema.innodb_buffer_page
    ->   where table_name rlike 'Body' limit 3;
+-------------+-----------+----------------+-----------+
| page_number | page_type | number_records | data_size |
+-------------+-----------+----------------+-----------+
|          87 | INDEX     |              8 |     14496 |
|          20 | INDEX     |              8 |     14496 |
|          81 | INDEX     |              8 |     14496 |
+-------------+-----------+----------------+-----------+

mysql> select count(*) from information_schema.innodb_buffer_page
    ->   where table_name rlike 'Body' and data_size/number_records=14496/8;
+----------+
| count(*) |
+----------+
|     5211 |
+----------+

mysql> select * from information_schema.innodb_buffer_page
    ->   where table_name rlike 'Body' and data_size/number_records<>14496/8;
+-------------+-----------+----------------+-----------+
| page_number | page_type | number_records | data_size |
+-------------+-----------+----------------+-----------+
|       36176 | INDEX     |           1050 |     14700 |
|       99796 | INDEX     |           1050 |     14700 |
...
|       36204 | INDEX     |           1050 |     14700 |
|           4 | INDEX     |            120 |      1680 |
|          39 | INDEX     |            120 |      1680 |
|       99801 | INDEX     |             50 |       700 |
+-------------+-----------+----------------+-----------+
27 rows in set (0.02 sec)
What I see in the output:

* Data (leaf) pages hold 8 rows each: 14496/8 = 1812 bytes per row. 5,211 of them are currently buffered.
* 24 of the buffered pages are non-leaf index pages holding 1,050 entries each, 14700/1050 = 14 bytes per entry.
* The remaining 3 pages (with 120, 120 and 50 records) appear to be higher-level non-leaf pages of the primary key B-tree.
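The page counts above are consistent with simple arithmetic on the table size (a quick check; the 14-byte entry size is taken directly from the 14700/1050 ratio in the output):

```python
# Expected number of data (leaf) pages: 1,000,000 rows at 8 rows/page.
leaf_pages = 1_000_000 // 8       # 125,000 leaf pages
# Size of one non-leaf index entry, from the buffer-pool output.
entry_size = 14700 // 1050        # 14 bytes per entry
print(leaf_pages, entry_size)     # 125000 14
```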
5. Create a procedure to report the number of buffer pool read requests issued since its last call.
mysql> DELIMITER '/';

mysql> create procedure GetReads()
    -> begin
    ->   select variable_value-@last, variable_value
    ->     from performance_schema.session_status
    ->     where VARIABLE_NAME rlike 'Innodb_buffer_pool_read_requests'
    ->     into @read, @last;
    ->   select @read, @last;
    -> end/

mysql> DELIMITER ';'/
6. Verify the number of pages to read for retrieving a row with a given primary key. I see 4 pages read from the buffer when retrieving row #100000, fewer than I expected.
mysql> set @last = 0;

mysql> call GetReads();
+----------+----------+
| @read    | @last    |
+----------+----------+
| 69411186 | 69411186 |
+----------+----------+

mysql> call GetReads();
+-------+----------+
| @read | @last    |
+-------+----------+
|     0 | 69411186 |
+-------+----------+

mysql> select * from Body where id=100000;
+--------+--------+------+------+------+------+------+------+------+
| id     | fid    | b1   | b2   | b3   | b4   | b5   | b6   | b7   |
+--------+--------+------+------+------+------+------+------+------+
| 100000 | 100000 | B1   | B2   | B3   | B4   | B5   | B6   | B7   |
+--------+--------+------+------+------+------+------+------+------+
1 row in set (0.00 sec)

mysql> call GetReads();
+-------+----------+
| @read | @last    |
+-------+----------+
|     4 | 69411190 |
+-------+----------+
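The 4 read requests are plausible for the depth of this primary-key B-tree (an estimate assuming about 1,050 entries per non-leaf page, as seen in the buffer-pool output above):

```python
import math

# Estimate the B-tree depth for 1,000,000 rows.
leaves = math.ceil(1_000_000 / 8)      # 125,000 leaf pages
level2 = math.ceil(leaves / 1050)      # ~120 non-leaf pages above them
# 120 entries fit in a single root page, so the tree has 3 levels
# (root -> intermediate -> leaf); 4 read requests for one lookup
# leaves only one request beyond the tree traversal itself.
print(leaves, level2)                  # 125000 120
```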
7. Verify the number of pages to read for running the "SELECT COUNT(*)" statement. I see 246,846 pages read from the buffer. Again, it looks like InnoDB did a full table scan twice, since there are only about 125,000 data pages in the table.
mysql> call GetReads();
+-------+----------+
| @read | @last    |
+-------+----------+
|     0 | 69411615 |
+-------+----------+

mysql> select count(*) from Body;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (23.72 sec)

mysql> call GetReads();
+--------+----------+
| @read  | @last    |
+--------+----------+
| 246846 | 69658461 |
+--------+----------+
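The ratio behind the "twice" observation can be checked quickly (using the ~125,000 data pages computed from 1,000,000 rows at 8 rows per page):

```python
# Buffer read requests for COUNT(*) versus the table's data pages.
requests = 246_846
leaf_pages = 125_000
ratio = requests / leaf_pages      # close to 2 requests per data page
print(round(ratio, 2))             # 1.97
```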