LO2: Microservice Dataset of Logs and Metrics
Date
2025-02-28
Publication Type
dataset
Access rights
Open
Description
LO2 dataset
This is the data repository for the LO2 dataset.
Here is an overview of the contents.
lo2-data.zip
This is the main dataset, the completely unedited output of our data collection process. Note that it is around 540 GB uncompressed. For more information, see the paper and the data-appendix in this repository.
lo2-sample.zip
This sample contains the data used for the preliminary analysis: only the service logs and the most relevant metrics for the first 100 runs. Furthermore, the metrics are combined at the run level into a single CSV file to make them easier to use.
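The run-level combination described above could look roughly like the sketch below, which joins per-metric CSV files on a shared timestamp column. The column names and the join key are illustrative assumptions, not taken from the actual scripts.

```python
from io import StringIO

import pandas as pd

def combine_run_metrics(metric_csvs):
    """Merge per-metric CSVs (each assumed to share a 'timestamp'
    column) into a single run-level DataFrame via outer joins."""
    frames = [pd.read_csv(f) for f in metric_csvs]
    combined = frames[0]
    for frame in frames[1:]:
        combined = combined.merge(frame, on="timestamp", how="outer")
    return combined.sort_values("timestamp").reset_index(drop=True)

# Toy example with two in-memory "metric files" (hypothetical metrics)
cpu = StringIO("timestamp,cpu_usage\n1,0.5\n2,0.7\n")
mem = StringIO("timestamp,mem_bytes\n1,1024\n3,2048\n")
df = combine_run_metrics([cpu, mem])
print(df)
```

An outer join keeps timestamps that appear in only some metric files, filling the missing cells with NaN rather than dropping rows.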
data-appendix.pdf
This document contains further details and statistics about the full dataset, including file size distributions, an empty-file analysis, a log type analysis, and the appearance of an unknown file.
lo2-scripts.zip
Various scripts for processing the data: creating the sample, conducting the preliminary analysis, and generating the statistics shown in the data-appendix.
csv_generator.py, csv_merge*.py: These scripts create and combine the metrics into CSV files. They must be run in order; merging the run-level files into the global one is very memory-intensive.
findempty.py: Finds empty files in the folders. As some files are expected to be empty, it also counts the unexpected ones. Used in the data-appendix.
loglead_lo2.py: Script for the preliminary analysis of the logs for error detection. Requires LogLead version 1.2.1.
logstats.py: Counts log lines and their types. Used to create the figure of the number of lines per type and service.
node_exporter_metrics.txt: Metric descriptions exported from Prometheus (text file).
pca.py: The Principal Component Analysis script used for preliminary analysis.
reduce_logs.py: Removes initialization rows from the beginning of the log files. This is very important for fair analysis, as those rows leak information about run correctness.
requirements.txt: Required Python libraries to run the scripts.
sizedist.py: Creates distributions of file sizes per filename for the data-appendix.
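As an illustration of the kind of filtering reduce_logs.py performs, the sketch below drops everything up to and including a hypothetical initialization marker; the real script's rule is in lo2-scripts.zip and may differ.

```python
def reduce_log(lines, init_marker="Started Application"):
    """Drop everything up to and including the first line containing
    init_marker (a hypothetical marker), so initialization rows that
    could leak run correctness are removed from the analysis input."""
    for i, line in enumerate(lines):
        if init_marker in line:
            return lines[i + 1:]
    return lines  # no marker found: keep the log unchanged

log = [
    "INFO Starting service",
    "INFO Started Application in 3.2 s",
    "INFO Handling request /health",
    "ERROR Request failed",
]
print(reduce_log(log))
```

Only the lines after the marker reach the error-detection step, so a classifier cannot pick up startup lines that correlate with whether a run was faulty.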
Version v3: Updated the data-appendix introduction and added another stage to the log analysis process in loglead_lo2.py.