Just finished checking in initial support for batching in 5.2. Even though the MySQL protocol doesn't support batching natively, the connector sends updates and deletes in multi-statement fashion (read: sent at the same time but separated by semicolons). For inserts the connector tries to be a little smarter. If the insert command is simple and uses a plain VALUES(...) block, the connector constructs a multi-value INSERT statement of the form INSERT INTO <table> (col, col, ...) VALUES (...), (...), (...). This is significantly faster for the server to process than sending the inserts in multi-statement mode.
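To make the insert-folding idea concrete, here is a rough sketch of how simple parameterized inserts could be folded into one multi-value INSERT. This is illustrative Python, not the actual connector code; the function name and the naive quoting are mine, and a real connector would use proper parameter binding and escaping.

```python
# Hypothetical sketch (NOT the actual Connector/NET implementation) of
# folding a batch of simple inserts into one multi-value INSERT statement.

def fold_inserts(table, columns, rows):
    """Build a single multi-value INSERT for a batch of value tuples.

    table/columns are SQL identifiers; rows is a list of value tuples.
    Quoting here is deliberately naive just to keep the sketch
    self-contained; real code would bind parameters instead.
    """
    def quote(value):
        if value is None:
            return "NULL"
        if isinstance(value, (int, float)):
            return str(value)
        # Double any embedded single quotes, SQL-style.
        return "'" + str(value).replace("'", "''") + "'"

    col_list = ", ".join(columns)
    # One "(v1, v2, ...)" block per row, all joined into a single VALUES list.
    value_blocks = ", ".join(
        "(" + ", ".join(quote(v) for v in row) + ")" for row in rows
    )
    return f"INSERT INTO {table} ({col_list}) VALUES {value_blocks}"

sql = fold_inserts("people", ["id", "name"], [(1, "Ann"), (2, "O'Brien")])
# One statement, one round trip, instead of one statement per row:
# INSERT INTO people (id, name) VALUES (1, 'Ann'), (2, 'O''Brien')
```

The server parses and executes a single statement, which is where the speedup over multi-statement mode comes from.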
For those not familiar with .NET: we have an object called a DataTable. It contains columns and rows and is what users work with to insert, update, and delete data on the client side. When the user wants to push those changes to the server, this table (or a series of these tables) is passed to the MySqlDataAdapter.Update method. A data adapter holds references to at most four different commands, one each for select, update, insert, and delete.
For each changed row in a data table, the data adapter chooses the appropriate command and asks it to execute, passing the values in that row as parameters. A user can set the data adapter's UpdateBatchSize property to tell the system how many commands should be sent at one time in a "batch". Setting it to 0 tells it to send as many as the server will allow.
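The per-row loop above can be sketched in a language-neutral way. This is pseudocode-style Python, not the ADO.NET implementation; the row-state strings, the command table, and the function names are all my own invention for illustration.

```python
# Illustrative sketch of a data adapter's batching loop (names are mine,
# not the actual ADO.NET API). commands maps a row state ('insert',
# 'update', 'delete') to a function that turns row values into a statement.

def update_batched(rows, commands, batch_size):
    """rows: list of (state, values) pairs.
    batch_size=0 means "as many as the server will allow" (unbounded here).
    Returns the multi-statement strings, one per round trip."""
    limit = batch_size if batch_size > 0 else float("inf")
    batches, pending = [], []
    for state, values in rows:
        if state == "unchanged":
            continue                           # nothing to send for this row
        pending.append(commands[state](values))
        if len(pending) >= limit:
            batches.append("; ".join(pending))  # flush: one network round trip
            pending = []
    if pending:
        batches.append("; ".join(pending))      # flush the final partial batch
    return batches

stmts = update_batched(
    [("insert", (1,)), ("insert", (2,)), ("delete", (3,)), ("unchanged", (4,))],
    {"insert": lambda v: f"INSERT ... ({v[0]})",
     "delete": lambda v: f"DELETE ... ({v[0]})"},
    batch_size=2,
)
# Two round trips instead of three single-statement ones.
```

With batch_size=2, the three changed rows go to the server in two round trips rather than three, which is exactly where the savings in the timings below come from.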
Batching saves tons of time when a data table contains many rows of the same operation, largely because it saves network round trips. To get an idea of how much faster batching is, I ran a simple test: I inserted 100,000 rows into a data table and then set up a data adapter to do the insert against a database running on Ubuntu in a virtual PC (so the round trip wasn't just to localhost).
Without batching: 1 minute 47 seconds.
With batching: 4.5 seconds (roughly a 24x speedup).