
Update field mappings in IDP Elastic cluster

On the IDP Elastic cluster, we use strict mapping of fields in the index, which requires an explicit definition of the message format. This means that any message coming into the cluster that doesn't match the mapping is rejected / dropped.
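
Strict mapping is enabled by setting dynamic to strict at the top level of the mapping. A minimal sketch - the field names here are illustrative, not the real IDP mapping:

```json
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "@timestamp": { "type": "date_nanos" },
      "message":    { "type": "text" }
    }
  }
}
```

With this setting, indexing a document that contains any field not listed under properties fails instead of silently extending the mapping.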

Removing an already existing mapping is not possible without migrating to a new mapping and moving all the data with it. Adding new fields is possible without issues.

This guide applies to logs / messages which don't use the ECS (Elastic Common Schema) format, where we maintain the mapping manually. That is the case for the IDP cluster.

IDP cluster mapping definitions

Since Elastic 8, we manage mappings via Terraform. We hardcoded the mapping JSON inside the Terraform module - the reason is that we want to make sure every environment of the IDP cluster uses a versioned mapping. When the need for a mapping change arises, we release a new version of the module. The new version can then be safely deployed on each environment independently to catch bugs in the mapping.
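
The environment Terraform then consumes the module pinned to a released version. A sketch - the module name and source address are assumptions, check the real idp-infra code:

```hcl
module "idp_elastic_mappings" {
  # Hypothetical source address; see the Terraform modules registry for the real one
  source  = "terraform.example.com/idp/elastic-mappings/elasticstack"
  version = "1.4.0" # bump this to roll out a new mapping release
}
```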

Links to repositories:

Note that not all applications send data to Elasticsearch directly - some send through a Kafka topic in Confluent. A connector in the Confluent Kafka cluster then handles replicating the messages from the topic to the Elastic cluster.

trader-* clusters have Connectors defined for the IDP cluster - you can find them in the Confluent Console
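
For reference, a Confluent Elasticsearch sink connector is configured roughly like this - the connector name, topic, and endpoint below are placeholders, the real definitions live in the Confluent Console:

```json
{
  "name": "idp-elastic-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "trader-audit-events",
    "connection.url": "https://<elastic-endpoint>:9243"
  }
}
```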

Releasing new mapping

  1. Update the desired mapping (JSON files in elasticstack/mappings).
  2. Create a git tag to release a new version. Consult the git tag output for the latest version.
  3. The pipeline will release a new Terraform module, which is stored with the project. You can check it in Terraform modules.
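
The tagging step can be sketched as follows. This demo runs in a throwaway repository and the version numbers are hypothetical - in practice, run the same git tag / git push commands inside the elasticstack repository checkout:

```shell
# Demo of the release-tagging step in a throwaway repo; version numbers are
# hypothetical. In practice, run the tag/push commands in the elasticstack repo.
cd "$(mktemp -d)" && git init -q .
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "update mapping"
git tag v1.4.0                          # pretend this was the last release
git tag --sort=-v:refname | head -n 1   # consult the latest version (v1.4.0)
git tag v1.5.0                          # create the next version tag
git tag --sort=-v:refname | head -n 1   # v1.5.0 is now the latest
# git push origin v1.5.0                # pushing the tag triggers the release pipeline
```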

Applying new mapping on Elastic Cluster

After the release, you need to apply the new version in the idp-infra Terraform repository.

  1. Select the directory of the environment where you want to apply the change (dev, dev2, stage, prod).
  2. Update the module version to the new release.
  3. The pipeline will run Terraform plan. Check the plan, and if everything looks as expected, run Terraform apply.
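
After the apply, you can verify in the Kibana Dev Tools console that the index template carries the new mapping - the template name pattern below is an assumption, adjust it to the real one:

```
GET _index_template/*trading-audit*
```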

Updating existing indices with new mapping

The previous step updates only the template for the index. That means the change will be effective only on indexes newly created after a rollover. Rollover is controlled by the Index Lifecycle Policy and can happen based on the age or size of the index.
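
You can inspect where each backing index sits in its lifecycle, or force an early rollover, from the Dev Tools console - the data stream name below reuses the example from the steps further down and may differ for your case:

```
GET logs-trader_dev-ftmo-trading-audit-dev/_ilm/explain
POST logs-trader_dev-ftmo-trading-audit-dev/_rollover
```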

When you want to apply changes to an existing index without waiting for a rollover, follow the Elastic docs.

Here is a detailed example based on the Elastic docs.

  1. Log in to Kibana of the IDP cluster at https://ftmo-idp.kb.europe-west3.gcp.cloud.es.io:9243 via Google SSO.
  2. Find the Data Stream in the Kibana menu - Management -> Stack Management -> Index Management -> Data Streams - and copy its name.
  3. Open the Dev Tools console from Kibana - Management -> Dev Tools.
  4. Put the data stream name (the change will apply to all backing indexes) in a request to the API like the one below. The request payload is the properties field of the JSON mapping you updated in the Terraform module - copy the whole properties block from the JSON.
PUT /logs-trader_dev-ftmo-trading-audit-dev/_mapping
{
 "properties": {
     "@timestamp": {
         "type": "date_nanos"
     },
     "auditedEventType": {
         "doc_values": true,
         "eager_global_ordinals": false,
         "index": true,
         "index_options": "docs",
         "norms": false,
         "split_queries_on_whitespace": false,
         "store": false,
         "type": "keyword"
     },
     ...
 }
}
  5. Check that the mapping is applied to the indexes in the data stream. Find the data stream (as in the previous step) and click the number in the Indices column - it will redirect you to the indices linked to this data stream.
  6. Click on an index name and select Mappings from the menu - you will see the updated properties field in the JSON object.
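
Alternatively, the same check can be done from the Dev Tools console - querying the data stream by name returns the mapping of every backing index:

```
GET logs-trader_dev-ftmo-trading-audit-dev/_mapping
```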