Metadata-Version: 2.4
Name: ai-edge-litert-nightly
Version: 2.2.0.dev20260410
Summary: LiteRT is for mobile and embedded devices.
Home-page: https://www.tensorflow.org/lite/
Author: Google AI Edge Authors
Author-email: packages@tensorflow.org
License: Apache 2.0
Keywords: litert tflite tensorflow tensor machine learning
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/plain
Requires-Dist: backports.strenum
Requires-Dist: flatbuffers
Requires-Dist: numpy>=1.23.2
Requires-Dist: tqdm
Requires-Dist: typing-extensions
Requires-Dist: protobuf
Provides-Extra: npu-sdk
Requires-Dist: ai-edge-litert-sdk-qualcomm~=0.1.0; extra == "npu-sdk"
Requires-Dist: ai-edge-litert-sdk-mediatek~=0.1.0; extra == "npu-sdk"
Provides-Extra: model-utils
Requires-Dist: lark; extra == "model-utils"
Requires-Dist: ml_dtypes; extra == "model-utils"
Requires-Dist: xdsl==0.28.0; extra == "model-utils"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: license
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: summary

LiteRT (formerly TensorFlow Lite) is Google's official runtime for running
machine learning models on mobile and embedded devices. It enables on-device
inference with low latency and a small binary size on Android, iOS, and other
operating systems.
