A Python toolbox/library for reality-centric machine learning and deep learning on partially-observed time series, built on PyTorch. It includes SOTA neural-network models supporting imputation, classification, clustering, and forecasting on incomplete (irregularly-sampled) multivariate time series with missing values represented as NaN.


Welcome to PyPOTS

a Python toolbox for machine learning on Partially-Observed Time Series


⦿ Motivation: Missing values are common in real-world time series for many reasons, such as sensor failures, communication errors, and unexpected malfunctions. This makes partially-observed time series (POTS) a pervasive problem in open-world modeling and hinders advanced data analysis. Despite its importance, the area of machine learning on POTS still lacks a dedicated toolkit. PyPOTS is created to fill this gap.

⦿ Mission: PyPOTS (pronounced "Pie Pots") is born to be a handy toolbox that makes machine learning on POTS easy rather than tedious, helping engineers and researchers focus on the core problems at hand rather than on how to deal with the missing parts of their data. PyPOTS will keep integrating both classical and the latest state-of-the-art machine learning algorithms for partially-observed multivariate time series. Beyond the algorithms themselves, PyPOTS provides unified APIs, detailed documentation, and interactive examples across algorithms as tutorials.

🤗 Please star this repo to help others notice PyPOTS if you think it is a useful toolkit. Please properly cite PyPOTS in your publications if it helps with your research. This really means a lot to our open-source research. Thank you!

The rest of this readme file is organized as follows: ❖ PyPOTS Ecosystem, ❖ Installation, ❖ Usage, ❖ Available Algorithms, ❖ Citing PyPOTS, ❖ Contribution, ❖ Community.

❖ PyPOTS Ecosystem

At PyPOTS, everything is related to coffee, something we are all familiar with. Yes, this is a coffee universe! As you can see, there is a coffee pot in the PyPOTS logo. What else? Please read on ;-)

TSDB logo

👈 Time series datasets are regarded as coffee beans at PyPOTS, and POTS datasets are incomplete coffee beans with missing parts that have their own meanings. To make various public time-series datasets readily available to users, Time Series Data Beans (TSDB) is created to make loading time-series datasets super easy! Visit TSDB right now to learn more about this handy tool 🛠; it currently supports a total of 168 open-source datasets!
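A minimal loading sketch is shown below; note that the function names `tsdb.list` and `tsdb.load` are assumptions based on TSDB's documented usage, so please check the TSDB repository for its current API.

import tsdb

# NOTE: the function names here are assumptions; consult the TSDB repo for the exact API.
print(tsdb.list())                  # list the open-source datasets TSDB can fetch
data = tsdb.load("physionet_2012")  # download, cache, and load one dataset
print(data.keys())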

PyGrinder logo

👉 To simulate real-world data beans with missingness, the ecosystem library PyGrinder, a toolkit that helps grind your coffee beans into incomplete ones, is created. According to Rubin's theory1, missing patterns fall into three categories: MCAR (missing completely at random), MAR (missing at random), and MNAR (missing not at random). PyGrinder supports all of them, along with additional missingness-related functionalities. With PyGrinder, you can introduce synthetic missing values into your datasets with a single line of code, as sketched below.
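For example, the mcar function (also used in the Usage section later) corrupts a complete array under the MCAR mechanism; the snippet below is a minimal sketch on a toy array, and the array shape is arbitrary.

import numpy as np
from pygrinder import mcar

X = np.random.randn(128, 48, 37)        # a toy dataset: 128 samples, 48 time steps, 37 features
X_with_missing = mcar(X, 0.1)           # randomly mask 10% of the observed values as NaN
print(np.isnan(X_with_missing).mean())  # roughly 0.1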

BrewPOTS logo

👈 Now that we have the beans, the grinder, and the pot, how do we brew ourselves a cup of coffee? Tutorials are necessary! Considering the future workload, PyPOTS tutorials are released in a separate repo, and you can find them all in BrewPOTS. Take a look now and learn how to brew your POTS datasets!

☕️ Welcome to the universe of PyPOTS. Enjoy it and have fun!

❖ Installation

You can refer to the installation instructions in the PyPOTS documentation for a more detailed guide.

PyPOTS is available on both PyPI and Anaconda. You can install PyPOTS as shown below:

# via pip
pip install pypots            # the first time installation
pip install pypots --upgrade  # update pypots to the latest version

# via conda
conda install -c conda-forge pypots  # the first time installation
conda update  -c conda-forge pypots  # update pypots to the latest version

Alternatively, you can install from the latest source code, which includes the newest features that may not be officially released yet:

pip install https://github.com/WenjieDu/PyPOTS/archive/main.zip

❖ Usage

Besides BrewPOTS, you can also find a simple quick-start tutorial notebook on Google Colab via this link. If you have further questions, please refer to the PyPOTS documentation at docs.pypots.com. You can also raise an issue or ask in our community.

Below is a usage example of imputing missing values in time series with PyPOTS; click it to view.

Click here to see an example applying SAITS on PhysioNet2012 for imputation:
import numpy as np
from sklearn.preprocessing import StandardScaler
from pygrinder import mcar
from pypots.data import load_specific_dataset
from pypots.imputation import SAITS
from pypots.utils.metrics import calc_mae

# Data preprocessing. Tedious, but PyPOTS can help.
data = load_specific_dataset('physionet_2012')  # PyPOTS will automatically download and extract it.
X = data['X']
num_samples = len(X['RecordID'].unique())
X = X.drop(['RecordID', 'Time'], axis=1)
X = StandardScaler().fit_transform(X.to_numpy())
X = X.reshape(num_samples, 48, -1)
X_ori = X  # keep X_ori for validation
X = mcar(X, 0.1)  # randomly hold out 10% observed values as ground truth
dataset = {"X": X}  # X for model input
print(X.shape)  # (11988, 48, 37), 11988 samples, 48 time steps, 37 features

# Model training. This is PyPOTS showtime.
saits = SAITS(
    n_steps=48, n_features=37, n_layers=2, d_model=256, d_inner=128,
    n_heads=4, d_k=64, d_v=64, dropout=0.1, epochs=10,
)
# Here we use the whole dataset as the training set because the ground truth is not visible to the model;
# you can also split it into train/val/test sets (see the sketch after this example).
saits.fit(dataset)
imputation = saits.impute(dataset)  # impute the originally-missing values and artificially-missing values
indicating_mask = np.isnan(X) ^ np.isnan(X_ori)  # indicating mask for imputation error calculation
mae = calc_mae(imputation, np.nan_to_num(X_ori), indicating_mask)  # calculate mean absolute error on the ground truth (artificially-missing values)
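If you prefer a held-out evaluation instead of fitting on the whole dataset, one option is to split the samples before building the dataset dictionaries; the sketch below is illustrative (the 80/20 split and the variable names are assumptions, not part of the PyPOTS API).

# A minimal sketch of a train/test split by samples (the split ratio and names are illustrative).
n_train = int(X.shape[0] * 0.8)
train_set = {"X": X[:n_train]}
test_set = {"X": X[n_train:]}
saits.fit(train_set)                      # train only on the first 80% of samples
test_imputation = saits.impute(test_set)  # impute the held-out 20% of samples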

❖ Available Algorithms

PyPOTS supports imputation, classification, clustering, and forecasting tasks on multivariate time series with missing values. The currently available algorithms for these four tasks are cataloged in the table below, with one partition per task. The paper references are all listed at the bottom of this readme file; please refer to them for more details.

🌟 Since v0.2, all neural-network models in PyPOTS have hyperparameter-optimization support. This functionality is implemented with the Microsoft NNI framework.
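As a rough, hedged sketch of how such tuning can be wired up (not necessarily how PyPOTS integrates NNI internally), a trial script can pull hyperparameters from the NNI tuner and pass them to a model constructor. nni.get_next_parameter() and nni.report_final_result() are standard NNI APIs; the search-space keys and the reuse of variables from the Usage example above are illustrative assumptions.

import nni
import numpy as np
from pypots.imputation import SAITS
from pypots.utils.metrics import calc_mae

# A hedged sketch of an NNI trial script; `dataset`, `X`, and `X_ori` are prepared as in the Usage example above.
params = nni.get_next_parameter()  # e.g. {"n_layers": 2, "dropout": 0.1} drawn from the NNI search space
saits = SAITS(n_steps=48, n_features=37, n_layers=params["n_layers"], d_model=256, d_inner=128,
              n_heads=4, d_k=64, d_v=64, dropout=params["dropout"], epochs=10)
saits.fit(dataset)
imputation = saits.impute(dataset)
indicating_mask = np.isnan(X) ^ np.isnan(X_ori)
mae = calc_mae(imputation, np.nan_to_num(X_ori), indicating_mask)
nni.report_final_result(mae)  # report the metric so the NNI tuner can guide the next trial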

Imputation 🚥 🚥 🚥
Type Abbr. Full name of the algorithm/model Year
Neural Net SAITS Self-Attention-based Imputation for Time Series 2 2023
Neural Net Transformer Attention is All you Need 3 (proposed in 3; re-implemented as an imputation model in 2) 2017
Neural Net TimesNet Temporal 2D-Variation Modeling for General Time Series Analysis 4 2023
Neural Net CSDI Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation 5 2021
Neural Net US-GAN Unsupervised GAN for Multivariate Time Series Imputation 6 2021
Neural Net GP-VAE Gaussian Process Variational Autoencoder 7 2020
Neural Net BRITS Bidirectional Recurrent Imputation for Time Series 8 2018
Neural Net M-RNN Multi-directional Recurrent Neural Network 9 2019
Naive LOCF Last Observation Carried Forward -
Classification 🚥 🚥 🚥
Type Abbr. Full name of the algorithm/model/paper Year
Neural Net BRITS Bidirectional Recurrent Imputation for Time Series 8 2018
Neural Net GRU-D Recurrent Neural Networks for Multivariate Time Series with Missing Values 10 2018
Neural Net Raindrop Graph-Guided Network for Irregularly Sampled Multivariate Time Series 11 2022
Clustering 🚥 🚥 🚥
Type Abbr. Full name of the algorithm/model/paper Year
Neural Net CRLI Clustering Representation Learning on Incomplete time-series data 12 2021
Neural Net VaDER Variational Deep Embedding with Recurrence 13 2019
Forecasting 🚥 🚥 🚥
Type Abbr. Full name of the algorithm/model/paper Year
Probabilistic BTTF Bayesian Temporal Tensor Factorization 14 2021

❖ Citing PyPOTS

[Updates in Jun 2023] 🎉 A short version of the PyPOTS paper was accepted at the 9th SIGKDD International Workshop on Mining and Learning from Time Series (MiLeTS'23). Besides, PyPOTS has been included as a PyTorch Ecosystem project.

The paper introducing PyPOTS is available on arXiv at this URL, and we are working to publish it in prestigious academic venues, e.g. JMLR (track for Machine Learning Open Source Software). If you use PyPOTS in your work, please cite it as below and 🌟 star this repository to help others notice this library. 🤗

There are scientific research projects that use PyPOTS and reference it in their papers. Here is an incomplete list of them.

@article{du2023PyPOTS,
title={{PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series}},
author={Wenjie Du},
year={2023},
eprint={2305.18811},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2305.18811},
doi={10.48550/arXiv.2305.18811},
}

Wenjie Du. (2023). PyPOTS: a Python toolbox for data mining on Partially-Observed Time Series. arXiv, abs/2305.18811. https://arxiv.org/abs/2305.18811

❖ Contribution

You're very welcome to contribute to this exciting project!

By committing your code, you'll

  1. make your well-established model available out of the box for PyPOTS users to run, and help your work gain more exposure and impact. Take a look at our inclusion criteria. You can utilize the template folder in each task package (e.g. pypots/imputation/template) to get started quickly;
  2. be listed as one of the PyPOTS contributors;
  3. get mentioned in our release notes.

You can also contribute to PyPOTS by simply starring 🌟 this repo to help more people notice it. Your star is your recognition of PyPOTS, and it matters!

👏 Click here to view PyPOTS stargazers and forkers.
We're so proud to have more and more awesome users, as well as more bright ✨ stars!

👀 Check out a full list of our users' affiliations on the PyPOTS website here!

❖ Community

We care about feedback from our users, so we're building the PyPOTS community on

  • Slack. General discussion, Q&A, and our development team are here;
  • LinkedIn. Official announcements and news are here;
  • WeChat (微信公众号). We also run a group chat on WeChat, and you can get the QR code from the official account after following it;

If you have any suggestions, want to contribute ideas, or want to share time-series related papers, join us and tell us. The PyPOTS community is open, transparent, and friendly. Let's work together to build and improve PyPOTS!


Footnotes

  1. Rubin, D. B. (1976). Inference and missing data. Biometrika.

  2. Du, W., Cote, D., & Liu, Y. (2023). SAITS: Self-Attention-based Imputation for Time Series. Expert Systems with Applications.

  3. Vaswani, A., Shazeer, N.M., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., & Polosukhin, I. (2017). Attention is All you Need. NeurIPS 2017.

  4. Wu, H., Hu, T., Liu, Y., Zhou, H., Wang, J., & Long, M. (2023). TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. ICLR 2023.

  5. Tashiro, Y., Song, J., Song, Y., & Ermon, S. (2021). CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation. NeurIPS 2021.

  6. Miao, X., Wu, Y., Wang, J., Gao, Y., Mao, X., & Yin, J. (2021). Generative Semi-supervised Learning for Multivariate Time Series Imputation. AAAI 2021.

  7. Fortuin, V., Baranchuk, D., Raetsch, G. & Mandt, S. (2020). GP-VAE: Deep Probabilistic Time Series Imputation. AISTATS 2020.

  8. Cao, W., Wang, D., Li, J., Zhou, H., Li, L., & Li, Y. (2018). BRITS: Bidirectional Recurrent Imputation for Time Series. NeurIPS 2018.

  9. Yoon, J., Zame, W. R., & van der Schaar, M. (2019). Estimating Missing Data in Temporal Data Streams Using Multi-Directional Recurrent Neural Networks. IEEE Transactions on Biomedical Engineering.

  10. Che, Z., Purushotham, S., Cho, K., Sontag, D.A., & Liu, Y. (2018). Recurrent Neural Networks for Multivariate Time Series with Missing Values. Scientific Reports.

  11. Zhang, X., Zeman, M., Tsiligkaridis, T., & Zitnik, M. (2022). Graph-Guided Network for Irregularly Sampled Multivariate Time Series. ICLR 2022.

  12. Ma, Q., Chen, C., Li, S., & Cottrell, G. W. (2021). Learning Representations for Incomplete Time Series Clustering. AAAI 2021.

  13. Jong, J.D., Emon, M.A., Wu, P., Karki, R., Sood, M., Godard, P., Ahmad, A., Vrooman, H.A., Hofmann-Apitius, M., & Fröhlich, H. (2019). Deep learning for clustering of multivariate clinical patient trajectories with missing values. GigaScience.

  14. Chen, X., & Sun, L. (2021). Bayesian Temporal Factorization for Multidimensional Time Series Prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence.
