npm install elasticdump -g
# Copy an index from production to staging with analyzer and mapping:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=analyzer
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
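# The file backups above can later be restored by reversing input and output.
# A minimal sketch, assuming the same index name on the target cluster
# (the staging URL below is a placeholder): restore the mapping first, then the data.
elasticdump \
--input=/data/my_index_mapping.json \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=/data/my_index.json \
--output=http://staging.es.com:9200/my_index \
--type=data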
# Backup an index to gzip using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz
# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody="{\"query\":{\"term\":{\"username\": \"admin\"}}}"
# Specify searchBody from a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody=@/data/searchbody.json
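# For illustration, /data/searchbody.json could contain the same query as the
# inline example above (hypothetical file contents):
cat > /data/searchbody.json <<'EOF'
{"query":{"term":{"username":"admin"}}}
EOF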
# Copy a single shard data:
elasticdump \
--input=http://es.com:9200/api \
--output=http://es.com:9200/api2 \
--input-params="{\"preference\":\"_shards:0\"}"
# Backup aliases to a file
elasticdump \
--input=http://es.com:9200/index-name/alias-filter \
--output=alias.json \
--type=alias
# Import aliases into ES
elasticdump \
--input=./alias.json \
--output=http://es.com:9200 \
--type=alias
# Backup templates to a file
elasticdump \
--input=http://es.com:9200/template-filter \
--output=templates.json \
--type=template
# Import templates into ES
elasticdump \
--input=./templates.json \
--output=http://es.com:9200 \
--type=template
# Split files into multiple parts
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--fileSize=10mb
# Import data from S3 into ES (using s3urls)
elasticdump \
--s3AccessKeyId "NULL" \
--s3SecretAccessKey "NULL" \
--input "s3://NULL/NULL.json" \
--output=http://production.es.com:9200/my_index
# Export ES data to S3 (using s3urls)
elasticdump \
--s3AccessKeyId "NULL" \
--s3SecretAccessKey "NULL" \
--input=http://production.es.com:9200/my_index \
--output "s3://NULL/NULL.json"
# Import data from MINIO (s3 compatible) into ES (using s3urls)
elasticdump \
--s3AccessKeyId "NULL" \
--s3SecretAccessKey "NULL" \
--input "s3://NULL/NULL.json" \
--output=http://production.es.com:9200/my_index \
--s3ForcePathStyle true \
--s3Endpoint https://production.minio.co
# Export ES data to MINIO (s3 compatible) (using s3urls)
elasticdump \
--s3AccessKeyId "NULL" \
--s3SecretAccessKey "NULL" \
--input=http://production.es.com:9200/my_index \
--output "s3://NULL/NULL.json"
--s3ForcePathStyle true
--s3Endpoint https://production.minio.co
# Import data from CSV file into ES (using csvurls)
# The csv:// prefix must be included so the file is parsed as CSV,
# e.g. --input "csv://NULL.csv"
elasticdump \
--input "csv:///data/cars.csv" \
--output=http://production.es.com:9200/my_index \
--csvSkipRows 1 \
--csvDelimiter ";"
# --csvSkipRows skips parsed rows (this does not include the headers row)
# --csvDelimiter defaults to ','
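# For illustration, a hypothetical /data/cars.csv matching the options above:
# a header row, ';' as the delimiter, and --csvSkipRows 1 skipping the first data row.
cat > /data/cars.csv <<'EOF'
make;model;year
skip-me;skip-me;0
Toyota;Corolla;2015
Honda;Civic;2018
EOF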
Notes
- This tool is likely to require Elasticsearch version 1.0.0 or higher
- Elasticdump (and Elasticsearch in general) will create indices if they don't exist upon import
- When exporting from elasticsearch, you can export an entire index (--input="http://localhost:9200/index") or a type of object from that index (--input="http://localhost:9200/index/type"). This requires ElasticSearch 1.2.0 or higher
- If the path to your elasticsearch installation is in a sub-directory, the index and type must be provided with a separate argument (--input="http://localhost:9200/sub/directory" --input-index=index/type). Using --input-index=/ will include all indices and types.
- We can use the put method to write objects. This means new objects will be created and old objects with the same ID will be updated
- The file transport will not overwrite any existing files by default; it will throw an exception if the file already exists. You can make use of --overwrite instead.
- If you need basic HTTP auth, you can use it like this: --input=http://name:password@production.es.com:9200/my_index
- If you choose a stdio output (--output=$), you can also request a more human-readable output with --format=human
- If you choose a stdio output (--output=$), all logging output will be suppressed
- If you are using Elasticsearch version 6.0.0 or higher, the offset parameter is no longer allowed in the scrollContext
- ES 6.x.x & higher no longer support the template property for _template. All templates prior to ES 6.0 have to be upgraded to use index_patterns
- ES 7.x.x & higher no longer support the type property. All templates prior to ES 6.0 have to be upgraded to remove the type property
- ES 5.x.x ignores the offset (from) parameter in the search body. All records will be returned
- ES 6.x.x from parameter can no longer be used in the search request body when initiating a scroll
- Index templates have been deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
- Ensure the JSON in the searchBody is properly escaped to avoid parsing issues: https://www.freeformatter.com/json-escape.html (a quoting sketch follows these notes)
- Dropped support for Node.JS 8 in Elasticdump v6.32.0. Node.JS 10+ is now required.
- Elasticdump v6.42.0 added support for CSV import/export using the fast-csv library
- Elasticdump v6.68.0 added support for specifying a file containing the searchBody
- Elasticdump v6.85.0 added support for ignoring auto columns in CSV
- Elasticdump v6.86.0 added support for searchBodyTemplate which allows the searchBody to be transformed
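A minimal sketch related to the searchBody escaping note above: in most POSIX shells, single-quoting the searchBody avoids having to backslash-escape the inner double quotes (the host and index below are placeholders):
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody='{"query":{"term":{"username":"admin"}}}'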
source: https://github.com/elasticsearch-dump/elasticsearch-dump