If you scan your log entries and see "Failed to index node", this indicates that Elastic Search failed to update its index for a node. Cloud CMS tells Elastic Search to update its index whenever a node is created or updated. When a node is deleted, Cloud CMS tells Elastic Search to remove the node from its index.
In other words, the index update failed and the search index no longer reflects the latest state of that node.
If you open up the error, you may see further information that indicates things like:
- failed to parse [author]
- unknown property [name]
Or similar messages suggesting that parts of the incoming JSON did not make sense given the field mappings that Elastic Search had already established for this index.
In the example above, these suggest that Elastic Search's existing mapping expects "author" to be an object field with a "name" sub-field, and the "author" value in the incoming document doesn't match that expectation. In the 3.1 release of Cloud CMS, when this occurs, the document fails to index.
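To make this concrete, here is a rough sketch of the kind of field mapping Elastic Search might have built up for such an index. The document type name ("node") and the field types here are illustrative assumptions, not the exact mapping Cloud CMS generates:

```json
{
  "mappings": {
    "node": {
      "properties": {
        "author": {
          "properties": {
            "name": { "type": "text" }
          }
        }
      }
    }
  }
}
```

Once a mapping like this exists, an incoming document whose "author" value has a different shape (for example, a plain string) will be rejected with a parse error along the lines shown above.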
The Elastic Search architecture is one in which all content indexed into an index is processed through a common field mapping. As such, it is possible for two different content types to define "author" differently and have those definitions collide upon indexing. For example, you might have Content Type A define "author" as a string. And you might have Content Type B define "author" as an object with a nested "name" sub-property.
In this case, when your content gets indexed, a content instance of Type A will index fine (with "author" interpreted to be a string). But when a content instance of Type B tries to index, Elastic Search will raise mapping errors because "author" isn't an object (in its understanding). You can get errors similar to the one above.
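As an illustration (the type names and field shapes below are hypothetical, mirroring the example above), the two colliding content type definitions might look something like this:

```json
{
  "title": "Content Type A",
  "type": "object",
  "properties": {
    "author": { "type": "string" }
  }
}
```

```json
{
  "title": "Content Type B",
  "type": "object",
  "properties": {
    "author": {
      "type": "object",
      "properties": {
        "name": { "type": "string" }
      }
    }
  }
}
```

Whichever shape indexes first effectively wins - it becomes the field mapping for "author" across the index, and instances of the other type then fail to index.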
Elastic Search has recognized this shortcoming in the product and has addressed it on its roadmap. If you want to get down to the nitty-gritty, you can read about it here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html
They explain there why they are adjusting their architecture to avoid this kind of conflict.
From the Cloud CMS perspective, we've prepared the upcoming 3.2 release to use Elastic Search 6.2.x (their latest release) and adjusted the way we use mappings to take advantage of some of these improvements. Among those improvements, 3.2 no longer fails to index documents that have field mapping collisions - instead, the colliding fields are ignored and the documents index as best they can.
Until 3.2 is available, I would recommend:
- See if you can work out which two content types may be colliding on the "author" field
- You may then be able to adjust your schema to avoid the field collision within Elastic Search (see the sketch after this list)
- Either way, you may opt to rebuild your indexes (Project Settings > Tools > Rebuild Search Index)
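As a sketch of the second suggestion (the field and type names are hypothetical), one way to avoid the collision is to rename the scalar field so the two types no longer write different shapes into the same "author" field:

```json
{
  "title": "Content Type A",
  "type": "object",
  "properties": {
    "authorName": { "type": "string" }
  }
}
```

With Content Type A using "authorName" and Content Type B keeping its "author" object, the two types no longer produce conflicting mappings for the same field.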
The third step (rebuilding the search index) tells Elastic Search to rebuild the index and, in turn, the field mappings. This gives it a fresh start at re-interpreting your content and getting the mappings right.