How can I insert many rows into a MySQL table and return the new IDs?

Normally I can insert a row into a MySQL table and get the last_insert_id back. Now, though, I want to bulk insert many rows into the table and get back an array of IDs. Does anyone know how I can do this?

There are some similar questions, but they are not exactly the same. I don’t want to insert the new IDs into any temporary table; I just want to get back the array of IDs.

Can I retrieve the lastInsertId from a bulk insert?

MySQL multiple row insert-select statement with last_insert_id()



Method 1

Old thread but just looked into this, so here goes: if you are using InnoDB on a recent version of MySQL, you can get the list of IDs using LAST_INSERT_ID() and ROW_COUNT().

InnoDB guarantees sequential numbers for AUTO_INCREMENT when doing bulk inserts, provided innodb_autoinc_lock_mode is set to 0 (traditional) or 1 (consecutive).
Consequently you can get the first ID from LAST_INSERT_ID() and the last by adding ROW_COUNT() - 1.
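A minimal sketch of this (assuming a table `t` with an auto-increment `id` column; both table and column names are placeholders). ROW_COUNT() refers to the previous statement, so read it immediately after the insert:

```sql
INSERT INTO t (val) VALUES ('a'), ('b'), ('c');

-- LAST_INSERT_ID() is the id of the FIRST row of the batch;
-- the batch spans first_id .. first_id + ROW_COUNT() - 1
SELECT LAST_INSERT_ID() AS first_id,
       LAST_INSERT_ID() + ROW_COUNT() - 1 AS last_id;
```

Both functions are per-connection, so concurrent inserts from other sessions do not affect the result.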

Method 2

The only way I can think of to do it is to store a unique identifier (a GUID) with each set of rows inserted, then select the row IDs by that identifier.

INSERT INTO t1 (col1, col2, col3, guid)
  (SELECT col1, col2, col3, '3aee88e2-a981-1027-a396-84f02afe7c70' FROM a_very_large_table);

SELECT id FROM t1 WHERE guid = '3aee88e2-a981-1027-a396-84f02afe7c70';

You could also generate the guid in the database by using uuid()
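For example, generating the GUID server-side with UUID(), the whole flow might look like this (the table name `t1` and column names are placeholders):

```sql
SET @guid = UUID();

INSERT INTO t1 (col1, col2, col3, guid)
  SELECT col1, col2, col3, @guid FROM a_very_large_table;

SELECT id FROM t1 WHERE guid = @guid;
```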

Method 3

Let’s assume we have a table called temptable with two columns, uid and col1, where uid is an auto-increment field. Doing something like the below will return all the inserted IDs in the result set. You can loop through the result set and get your IDs. I realize that this is an old post and this solution might not work for every case, but for others it might, and that’s why I’m replying to it.

# lock the table
lock tables temptable write;

# bulk insert the rows
insert into temptable(col1) values(1),(2),(3),(4);

# get the value of the first inserted row: when bulk inserting,
# last_insert_id() gives the value of the FIRST inserted row of the bulk op
set @first_id = last_insert_id();

# now select the auto-increment field whose value is greater than or equal
# to the first row's. Since you hold a write lock on the table, other
# sessions can't write to it, so this result set holds all the inserted ids
select uid from temptable where uid >= @first_id;

# now that you are done, don't forget to unlock the table
unlock tables;

Method 4

This thread is old, but none of these solutions helped me, so I came up with my own.

First, count how many rows you want to insert; let’s say we need to add 5 rows.

Then read the table’s current auto_increment value, and use that value in the next query to reserve the range.

Finally, do your inserts, using the reserved auto-increment range to insert each row with an explicit id.

Warning: this solution requires an elevated access level to the tables. But bulk inserts are usually run by crons and importer scripts that may use special access anyway; you would not use this for just a few inserts.

This may leave unused IDs if you use ON DUPLICATE KEY UPDATE.
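The code blocks from the original answer did not survive, but the three steps can be sketched roughly as follows. Table and column names are placeholders, and on MySQL 8 the information_schema value may be cached (see information_schema_stats_expiry), so treat this as an illustration rather than a drop-in:

```sql
-- 1) we want to insert 5 rows
SET @n = 5;

-- 2) read the table's current auto_increment value
SELECT AUTO_INCREMENT INTO @first_id
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'my_table';

-- 3) reserve the range by bumping auto_increment past it
-- (ALTER TABLE needs a literal value, hence the prepared statement;
--  this is also why elevated privileges are required)
SET @sql = CONCAT('ALTER TABLE my_table AUTO_INCREMENT = ', @first_id + @n);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

-- 4) insert with explicit ids from the reserved range
INSERT INTO my_table (id, col_a) VALUES
  (@first_id + 0, 'a'),
  (@first_id + 1, 'b'),
  (@first_id + 2, 'c'),
  (@first_id + 3, 'd'),
  (@first_id + 4, 'e');
```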

Method 5

It’s worth noting that @Dag Sondre Hansen’s answer can also be made to work when innodb_autoinc_lock_mode is set to 2, by simply locking the table before the insert.

INSERT INTO my_table (col_a, col_b, col_c) VALUES (1,2,3), (4,5,6), (7,8,9);
SET @row_count = ROW_COUNT();
SET @last_insert_id = LAST_INSERT_ID();
SELECT id FROM my_table WHERE id >= @last_insert_id AND id <= @last_insert_id + (@row_count - 1);


Method 6

I think you will have to track either a transaction id or the item ids in your application in order to do this flawlessly.

One way to do this, which could work assuming that all your inserts succeed (!), is the following:

You can then get the inserted ids with a loop over the number of affected rows, starting at lastid (which is the first inserted id of the bulk insert).

I checked that this works perfectly. Just be careful: HeidiSQL, for example, will not return the correct value for ROW_COUNT() (a quirk of that GUI), but the value is perfectly correct from either the command line or PHP mysqli.

INSERT into test (b) VALUES ('1'),('2'),('3');

In PHP it looks like this (local_sqle is a straight call to mysqli_query; local_sqlec is a call to mysqli_query plus conversion of the result set to a PHP array):

$r = local_sqle("INSERT into test (b) VALUES ('1'),('2'),('3');");
$r = local_sqlec("SELECT LAST_INSERT_ID() AS lastid, ROW_COUNT() AS rowcount;");
echo "last id =".($r[0]['lastid'])."<br>";
echo "Row count =".($r[0]['rowcount'])."<br>";
for ($i = 0; $i < $r[0]['rowcount']; $i++) {
    echo "inserted id =".($r[0]['lastid']+$i)."<br>";
}

The reason the queries are separated is that I wouldn’t otherwise get the result using my own wrapper functions. If you use the standard functions, you can put it back into one statement and then retrieve the result you need (it should be result set number 2, assuming you use an extension that handles more than one result set per query).

Method 7

I wouldn’t count on the auto-increment value increasing by exactly 1 per row. If your DB uses master-master replication, auto_increment_increment is set to 2 to avoid duplicate-key collisions between masters, and with one more master it becomes 3. So relying on AUTO_INCREMENT going up by 1 can kill your project.

I see only a couple of good options to do that.

This SQL snippet has no problems with multiple masters and gives good results as long as you need only the inserted records. Note that with multiple concurrent requests and no transaction, it can also catch other sessions’ inserted records.

SELECT max(id) into @maxLastId FROM `main_table`;
INSERT INTO `main_table` (`value`) VALUES ('first'), ('second') ON DUPLICATE KEY UPDATE `value` = VALUES(`value`);
SELECT `id` FROM `main_table` WHERE id > @maxLastId OR @maxLastId IS NULL;

If you also need the records updated by ON DUPLICATE KEY UPDATE, you will need to refactor the database a bit, and the SQL will look like the following (safe with and without transactions inside one connection):

INSERT INTO bulk_inserts VALUES (null);
SET @blukTransactionId = LAST_INSERT_ID();
SELECT  @blukTransactionId, LAST_INSERT_ID();
INSERT INTO `main_table` (`value`, `transaction_id`) VALUES ('first', @blukTransactionId), ('second', @blukTransactionId) ON DUPLICATE KEY UPDATE `value` = VALUES(`value`), `transaction_id` = VALUES(`transaction_id`);
SELECT  @blukTransactionId, LAST_INSERT_ID();
SELECT id FROM `main_table` WHERE `transaction_id` = @blukTransactionId;

Both cases are transaction-safe. The first will show you only inserted records, while the second will give you all affected records, even updated ones.

These options will also work even with INSERT IGNORE.

Method 8

For anyone using Java with JDBC, it is possible. I am getting IDs back from a batch insert like this:

PreparedStatement insertBatch = null;
Connection connection = ....;

for (Event event : events) {
    if (insertBatch == null) {
        insertBatch = connection.prepareStatement("insert into `event` (game, `type`, actor, target, arg1, arg2, arg3, created) " +
            "values (?, ?, ?, ?, ?, ?, ?, ?)", Statement.RETURN_GENERATED_KEYS);
    }
    insertBatch.setObject(1, event.game);
    insertBatch.setString(2, event.type);
    insertBatch.setObject(3, event.actor);
    insertBatch.setObject(4, event.target);
    insertBatch.setString(5, event.arg1);
    insertBatch.setObject(6, event.arg2);
    insertBatch.setObject(7, event.arg3);
    insertBatch.setTimestamp(8, new Timestamp(event.created.getTime()));
    insertBatch.addBatch();
}

if (insertBatch != null) {
    insertBatch.executeBatch();
    ResultSet generatedKeys = insertBatch.getGeneratedKeys();
    for (Event event : events) {
        if (generatedKeys == null || !generatedKeys.next()) {
            logger.warn("Unable to retrieve all generated keys");
        }
        event.id = generatedKeys.getLong(1);
    }
    logger.debug("events inserted");
}

Source: “Using MySQL I can do it with JDBC this way” – Plap.

I had to actually add this to my JDBC URL: rewriteBatchedStatements=true. Otherwise the actual inserts show up in the MySQL general query log as separate rows. With 7000 rows inserted, I got 2m11s for regular inserts, 46s for batched inserts without the rewrite option, and 1.1s with it on. Also, it does not make other people’s inserts block (I tested that). When I inserted 200k rows, it grouped them into about 36k per statement, i.e. insert into abc(..) values(..),(..),(..)....

I am actually using JdbcTemplate, so the way to access the PreparedStatement is:

ArrayList<Long> generatedIds = (ArrayList<Long>) jdbcTemplate.execute(
    new PreparedStatementCreator() {
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            return connection.prepareStatement(insertSql, Statement.RETURN_GENERATED_KEYS);
        }
    },
    new PreparedStatementCallback<Object>() {
        public Object doInPreparedStatement(PreparedStatement ps) throws SQLException, DataAccessException {
            // see above answer for setting the row data

            ps.executeBatch();
            ResultSet resultSet = ps.getGeneratedKeys();
            ArrayList<Long> ids = new ArrayList<>();
            while (resultSet.next()) {
                ids.add(resultSet.getLong(1));
            }
            return ids;
        }
    }
);
Method 9

This can be solved in another way if you know the number of rows that you inserted.
For instance, say you inserted 4 rows, and the last id in your table before the insert was 600.

So right after performing the insert query, you can use

select last_insert_id()

to get the first insert id. Now you can do this simple calculation:

first insert id + number of rows - 1

The LAST_INSERT_ID() value would be 601 here, so it would be

601 + 4 - 1 = 604

So the inserted ids are all the ids between 601 and 604.

InnoDB is needed. Make sure to put all the queries in a transaction.
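Put together (table and column names are placeholders), the arithmetic above can be expressed directly in SQL. Note that ROW_COUNT() must be read in the statement immediately following the insert:

```sql
START TRANSACTION;

INSERT INTO my_table (name) VALUES ('a'), ('b'), ('c'), ('d');

-- e.g. @first = 601, @n = 4 in the example above
SELECT LAST_INSERT_ID(), ROW_COUNT() INTO @first, @n;

-- 601 + 4 - 1 = 604
SELECT id FROM my_table WHERE id BETWEEN @first AND @first + @n - 1;

COMMIT;
```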

Solution 2
This only works if you have a varchar or some sort of text field.

Append a random string to the end of the column value. For example, if the random string is 123sdfsklls113 and you have a name field whose value is Sam, the field in this case would become Sam123sdfsklls113.

After you insert the rows, now you can easily get the last inserted rows using this

select id from table_name where column_name like '%randomstring_here'

Then after you get the ids, you can update the column values easily like this

update table_name set column_name = replace(column_name, 'randomstring_here', '') where id in (last inserted ids here)

it will remove the random string from the column value.

Method 10

$idArray = array();
foreach ($array as $key) {
    // run the single-row insert for this element here, then:
    array_push($idArray, mysql_insert_id());
}
All methods were sourced from their original answers and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
