Metadata-Version: 2.2
Name: cotengra
Version: 0.6.2
Summary: Hyper optimized contraction trees for large tensor networks and einsums.
Home-page: https://github.com/jcmgray/cotengra
Author: Johnnie Gray
Author-email: johnniemcgray@gmail.com
License: Apache
Project-URL: Bug Reports, https://github.com/jcmgray/cotengra/issues
Project-URL: Source, https://github.com/jcmgray/cotengra/
Keywords: tensor network contraction graph hypergraph partition einsum
Classifier: Development Status :: 3 - Alpha
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE.md
Requires-Dist: autoray
Provides-Extra: recommended
Requires-Dist: cotengrust>=0.1.3; extra == "recommended"
Requires-Dist: cytoolz; extra == "recommended"
Requires-Dist: kahypar; extra == "recommended"
Requires-Dist: networkx; extra == "recommended"
Requires-Dist: numpy; extra == "recommended"
Requires-Dist: opt_einsum; extra == "recommended"
Requires-Dist: optuna; extra == "recommended"
Requires-Dist: ray; extra == "recommended"
Requires-Dist: tqdm; extra == "recommended"
Provides-Extra: docs
Requires-Dist: furo; extra == "docs"
Requires-Dist: ipython!=8.7.0; extra == "docs"
Requires-Dist: myst-nb; extra == "docs"
Requires-Dist: setuptools_scm; extra == "docs"
Requires-Dist: sphinx-autoapi; extra == "docs"
Requires-Dist: astroid<3.0.0; extra == "docs"
Requires-Dist: sphinx-copybutton; extra == "docs"
Requires-Dist: sphinx-design; extra == "docs"
Requires-Dist: sphinx>=2.0; extra == "docs"
Provides-Extra: test
Requires-Dist: altair; extra == "test"
Requires-Dist: baytune; extra == "test"
Requires-Dist: chocolate; extra == "test"
Requires-Dist: dask; extra == "test"
Requires-Dist: distributed; extra == "test"
Requires-Dist: kahypar; extra == "test"
Requires-Dist: matplotlib; extra == "test"
Requires-Dist: networkx; extra == "test"
Requires-Dist: nevergrad; extra == "test"
Requires-Dist: numpy; extra == "test"
Requires-Dist: opt_einsum; extra == "test"
Requires-Dist: pytest; extra == "test"
Requires-Dist: seaborn; extra == "test"
Requires-Dist: skopt; extra == "test"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: project-url
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

<p align="left"><img src="https://imgur.com/OM5XyaD.png" alt="cotengra" width="400px"></p>

[![tests](https://github.com/jcmgray/cotengra/actions/workflows/test.yml/badge.svg)](https://github.com/jcmgray/cotengra/actions/workflows/test.yml)
[![codecov](https://codecov.io/gh/jcmgray/cotengra/branch/main/graph/badge.svg?token=Q5evNiuT9S)](https://codecov.io/gh/jcmgray/cotengra)
[![Codacy Badge](https://app.codacy.com/project/badge/Grade/84f825f5a7044762be62600c0650473d)](https://app.codacy.com/gh/jcmgray/cotengra/dashboard?utm_source=gh&utm_medium=referral&utm_content=&utm_campaign=Badge_grade)
[![Docs](https://readthedocs.org/projects/cotengra/badge/?version=latest)](https://cotengra.readthedocs.io)
[![PyPI](https://img.shields.io/pypi/v/cotengra?color=teal)](https://pypi.org/project/cotengra/)
[![Anaconda-Server Badge](https://anaconda.org/conda-forge/cotengra/badges/version.svg)](https://anaconda.org/conda-forge/cotengra)

`cotengra` is a Python library for contracting tensor networks or einsum
expressions involving large numbers of tensors - the main docs can be found
at [cotengra.readthedocs.io](https://cotengra.readthedocs.io/).
Some of the key features of `cotengra` include:

* drop-in ``einsum`` replacement
* an explicit **contraction tree** object that can be flexibly built, modified and visualized
* a **'hyper optimizer'** that samples trees while tuning the generating meta-parameters
* **dynamic slicing** for massive memory savings and parallelism
* support for **hyper** edge tensor networks and thus arbitrary einsum equations
* **paths** that can be supplied to [`numpy.einsum`](https://numpy.org/doc/stable/reference/generated/numpy.einsum.html), [`opt_einsum`](https://dgasmith.github.io/opt_einsum/), [`quimb`](https://quimb.readthedocs.io/en/latest/) among others
* **performing contractions** with tensors from many libraries via [`autoray`](https://github.com/jcmgray/autoray),
  even if they don't provide `einsum` or `tensordot` but do have (batch) matrix
  multiplication

<p align="center"><img src="https://imgur.com/jMO138y.png" alt="cotengra" width="500px"></p>
