Hi Everyone!

We are ingesting documents into a new Elasticsearch index using 40 separate processes.

We're experiencing an issue that seems to occur only during the first few requests. A bulk insert request fails with the following remote error:

```
{
  "type"=>"process_cluster_event_timeout_exception",
  "reason"=>"failed to process cluster event (put-mapping) within 30s"
}
```

We subsequently recover by retrying the bulk insert, but we would like to understand what's going on and ideally avoid the problem entirely.
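For context, our retry logic is conceptually something like the following (a minimal Python sketch for illustration; the client setup, endpoint, backoff values, and error handling are all simplified stand-ins for what we actually run):

```
import time

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # illustrative endpoint

def bulk_with_retry(actions, max_attempts=5):
    # actions must be a materialised list, not a generator,
    # so that it can be replayed on retry.
    for attempt in range(1, max_attempts + 1):
        try:
            helpers.bulk(es, actions)
            return
        except Exception:
            # In practice we match on
            # process_cluster_event_timeout_exception specifically.
            if attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # exponential backoff
```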

At this stage there are only two indexes (the old and the new), each with 3 shards. The eventual size is around 6 GB, with around 20 million documents.

Each of the 40 processes attempts to create the index and then starts inserting; the index creation relies on a dynamic template we have configured in the cluster. Roughly, each worker does something like the sketch below.
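(A minimal Python sketch of each worker; the index name, document shape, and endpoint are illustrative, and our actual code is simplified here:)

```
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # illustrative endpoint
INDEX = "documents-v2"                        # illustrative index name

def ingest(docs):
    # Every worker races to create the index; the losers see
    # resource_already_exists_exception and carry on.
    try:
        if not es.indices.exists(index=INDEX):
            es.indices.create(index=INDEX)  # mappings come from the template
    except Exception:
        pass  # another worker won the race

    # Then we immediately start bulk inserting; new fields trigger
    # dynamic-mapping (put-mapping) updates in the cluster state.
    helpers.bulk(
        es,
        ({"_index": INDEX, "_source": doc} for doc in docs),
    )
```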

My assumption is that multiple processes are triggering the index creation simultaneously, and that the index has not fully initialized before we immediately attempt to bulk insert documents. The put-mapping in the error also makes me wonder whether the concurrent bulk inserts are each triggering dynamic mapping updates that pile up on the master. Does this sound plausible?
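One idea we're considering (not yet tried) is to have a single coordinator pre-create the index with explicit mappings before any of the 40 workers start, so the bulk inserts never need to put mappings themselves. A rough sketch, again with illustrative names and fields, using the elasticsearch-py 8.x client:

```
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # illustrative endpoint
INDEX = "documents-v2"                        # illustrative index name

def prepare_index():
    # Run exactly once, before the workers start.
    if es.indices.exists(index=INDEX):
        return
    es.indices.create(
        index=INDEX,
        settings={"number_of_shards": 3},
        mappings={
            # Declaring the fields up front should mean the bulk
            # inserts never trigger put-mapping cluster events.
            "properties": {
                "title": {"type": "text"},       # illustrative fields
                "created_at": {"type": "date"},
            }
        },
    )
    # Wait for shard allocation before unleashing the workers.
    es.cluster.health(index=INDEX, wait_for_status="yellow")
```

Pre-creating once would also remove the create-index race between the workers entirely.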

Does anyone know whether that would help, or how else the problem might be avoided?
