
Occasionally, as Elasticsearch administrators, we may encounter a situation where all indices are automatically set to read_only_allow_delete=true, preventing write operations. This usually happens when the cluster runs out of available disk space. Let's discuss why this happens, how to resolve it, and how to prevent it in the future.
So, why do indices become read_only_allow_delete=true? Elasticsearch includes built-in mechanisms to prevent nodes from running out of disk space. When a node reaches specific disk usage thresholds, Elasticsearch automatically applies a read-only block to protect the cluster's stability.
Here’s how it works:
- Thresholds:
  - Low watermark: Elasticsearch warns that disk space is running low.
  - High watermark: New shards are not allocated to nodes with insufficient disk space.
  - Flood stage: Indices with shards on the affected node are set to read_only_allow_delete=true to prevent further writes.
- Even after clearing up disk space, the read_only_allow_delete setting is not automatically removed. Administrators must reset it manually.
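Before clearing the block, it is worth checking how close each node actually is to these thresholds. One quick way to do this is the _cat/allocation API; the column selection below is just one convenient combination, not the only option:
GET _cat/allocation?v&h=node,disk.percent,disk.used,disk.avail,disk.total
The disk.percent column shows per-node disk usage, which you can compare directly against your low, high, and flood-stage watermark values.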
How to Fix the Issue
1. Remove the read_only_allow_delete Block
To remove the block for all indices, use the following API request:
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": false
}
If you need to apply this change to a specific index, replace _all with the name of the index.
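To confirm the block has actually been cleared (or to see which indices still carry it), you can query the setting directly. This is a minimal check; the flat_settings flag is optional and only used here to make the response easier to scan:
GET _all/_settings/index.blocks.read_only_allow_delete?flat_settings=true
Indices that still return "index.blocks.read_only_allow_delete": "true" have not been unblocked yet.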
2. Adjust Disk Watermark Settings
To prevent future issues, review and adjust your cluster's disk watermark thresholds (disk.watermark.low, disk.watermark.high, disk.watermark.flood_stage) based on your requirements.
Use percentage-based or byte-based values consistently, as Elasticsearch doesn’t allow mixing these formats. Example settings:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}
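If you prefer byte-based values instead, keep in mind that they specify the minimum free space that must remain on a node rather than a used-space percentage. The figures below are purely illustrative and should be sized to your own disks:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}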
It is important to note that this is not a bug but expected behavior of Elasticsearch. It is a protective mechanism designed to maintain cluster stability and prevent data loss when disk space is critically low. If your indices are marked as read-only due to disk space issues, it's a signal to review your disk usage and thresholds. Clear the read_only_allow_delete setting, check that you have enough disk space, and validate your watermark configurations to avoid similar incidents in the future.