Prometheus includes a local on-disk time series database, but also optionally integrates with remote storage systems.

Prometheus's local time series database stores data in a custom, highly efficient format on local storage.

Ingested samples are grouped into blocks of two hours. Each two-hour block consists of a directory containing a chunks subdirectory with all the time series samples for that window of time, a metadata file, and an index file (which indexes metric names and labels to time series in the chunks directory). The samples in the chunks directory are grouped together into one or more segment files of up to 512MB each by default. When series are deleted via the API, deletion records are stored in separate tombstone files (instead of the data being deleted immediately from the chunk segments).

The current block for incoming samples is kept in memory and is not fully persisted. It is secured against crashes by a write-ahead log (WAL) that can be replayed when the Prometheus server restarts. Write-ahead log files contain raw data that has not yet been compacted; thus they are significantly larger than regular block files. Prometheus will retain a minimum of three write-ahead log files. High-traffic servers may retain more than three WAL files in order to keep at least two hours of raw data. A Prometheus server's data directory contains these block directories alongside the `wal` and `chunks_head` directories.

Note that a limitation of local storage is that it is not clustered or replicated. Thus, it is not arbitrarily scalable or durable in the face of drive or node outages and should be managed like any other single-node database. The use of RAID is suggested for storage availability, and snapshots are recommended for backups. With proper architecture, it is possible to retain years of data in local storage. Alternatively, external storage may be used via the remote read/write APIs, since Prometheus's local storage is limited to a single node's scalability and durability. Careful evaluation is required for these external systems, as they vary greatly in durability, performance, and efficiency.

For further details on the file format, see TSDB format.

The initial two-hour blocks are eventually compacted into longer blocks in the background. Compaction will create larger blocks containing data spanning up to 10% of the retention time, or 31 days, whichever is smaller.

Prometheus has several flags that configure local storage:

- `--storage.tsdb.path`: Where Prometheus writes its database.
- `--storage.tsdb.retention.time`: When to remove old data. Overrides `storage.tsdb.retention` if this flag is set to anything other than the default.
- `--storage.tsdb.retention.size`: The maximum number of bytes of storage blocks to retain. Units supported: B, KB, MB, GB, TB, PB, EB. Only the persistent blocks are deleted to honor this retention, although the WAL and m-mapped chunks are counted in the total size. So the minimum requirement for the disk is the peak space taken by the `wal` (the WAL and checkpoint) and `chunks_head` (m-mapped head chunks) directories combined, which peaks every two hours.
- `--storage.tsdb.wal-compression`: Enables compression of the write-ahead log (WAL). Depending on your data, you can expect the WAL size to be halved with little extra CPU load. This flag was introduced in 2.11.0 and enabled by default in 2.20.0. Note that once enabled, downgrading Prometheus to a version below 2.11.0 will require deleting the WAL.

Prometheus stores an average of only 1-2 bytes per sample. Thus, to plan the capacity of a Prometheus server, you can use the rough formula:

`needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample`

To lower the rate of ingested samples, you can either reduce the number of time series you scrape (fewer targets or fewer series per target), or you can increase the scrape interval. However, reducing the number of series is likely more effective, due to compression of samples within a series.

CAUTION: Non-POSIX compliant filesystems are not supported for Prometheus's local storage, as unrecoverable corruption may happen. NFS filesystems (including AWS's EFS) are not supported. NFS could be POSIX-compliant, but most implementations are not. It is strongly recommended to use a local filesystem for reliability.

If your local storage becomes corrupted for whatever reason, the best strategy to address the problem is to shut down Prometheus, then remove the entire storage directory. You can also try removing individual block directories, or the WAL directory, to resolve the problem. Note that this means losing approximately two hours of data per block directory. Prometheus's local storage is not intended to be durable long-term storage; external solutions offer extended retention and data durability.

Expired block cleanup happens in the background. It may take up to two hours to remove expired blocks, and blocks must be fully expired before they are removed. If both time and size retention policies are specified, whichever triggers first will be used.
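To make the capacity formula concrete, here is a small worked example. The retention period and ingestion rate below are illustrative assumptions, not recommendations; only the 1-2 bytes-per-sample figure comes from the text above.

```python
# Rough capacity planning using the formula from the text:
#   needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample

retention_days = 15                     # assumed retention period
ingested_samples_per_second = 100_000   # assumed ingestion rate
bytes_per_sample = 2                    # upper end of the 1-2 bytes/sample average

retention_time_seconds = retention_days * 24 * 60 * 60
needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample

print(f"{needed_disk_space / 1e9:.1f} GB")  # 259.2 GB
```

Halving the number of series (rather than the scrape interval) would roughly halve this figure, which is why reducing series count is usually the more effective lever.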
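The size units accepted by `--storage.tsdb.retention.size` are interpreted as powers of two (1KB = 1024B). The helper below is an illustrative sketch of that conversion, not part of Prometheus itself:

```python
# Illustrative conversion of a --storage.tsdb.retention.size value
# (e.g. "512MB") into bytes, using the powers-of-two units Prometheus accepts.

UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3,
         "TB": 1024**4, "PB": 1024**5, "EB": 1024**6}

def retention_size_bytes(value: str) -> int:
    # Try the two-letter suffixes before the bare "B" suffix.
    for suffix in sorted(UNITS, key=len, reverse=True):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * UNITS[suffix]
    raise ValueError(f"unsupported size: {value!r}")

print(retention_size_bytes("512MB"))  # 536870912
```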
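The on-disk layout described above looks roughly like the following sketch. The block directory name is a random ULID and the file names are illustrative, not output from a real server:

```
data/
├── 01BKGV7JBM69T2G1BGBGM6KB12   # a two-hour (or compacted) block
│   ├── chunks
│   │   └── 000001               # segment files, up to 512MB each
│   ├── tombstones               # deletion records from the delete API
│   ├── index
│   └── meta.json
├── chunks_head                  # m-mapped chunks of the in-memory head block
│   └── 000001
└── wal                          # write-ahead log, replayed on restart
    ├── 000000002
    └── checkpoint.00000001
```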