Metadata-Version: 2.1
Name: jina
Version: 3.19.1
Summary: Multimodal AI services & pipelines with cloud-native stack: gRPC, Kubernetes, Docker, OpenTelemetry, Prometheus, Jaeger, etc.
Home-page: https://github.com/jina-ai/jina/
Download-URL: https://github.com/jina-ai/jina/tags
Author: Jina AI
Author-email: hello@jina.ai
License: Apache 2.0
Project-URL: Documentation, https://docs.jina.ai
Project-URL: Source, https://github.com/jina-ai/jina/
Project-URL: Tracker, https://github.com/jina-ai/jina/issues
Keywords: jina cloud-native cross-modal multimodal neural-search query search index elastic neural-network encoding embedding serving docker container image video audio deep-learning mlops
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Unix Shell
Classifier: Environment :: Console
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Database :: Database Engines/Servers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Multimedia :: Video
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0)
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0)
Requires-Dist: python-multipart
Requires-Dist: docarray (>=0.16.4)
Requires-Dist: aiohttp
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0)
Requires-Dist: opentelemetry-api (>=1.12.0)
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0)
Requires-Dist: websockets
Requires-Dist: pydantic (<2.0.0)
Requires-Dist: filelock
Requires-Dist: grpcio (>=1.49.0)
Requires-Dist: jcloud (>=0.0.35)
Requires-Dist: grpcio-health-checking (>=1.49.0)
Requires-Dist: pyyaml (>=5.3.1)
Requires-Dist: docker
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1)
Requires-Dist: protobuf (>=3.19.0)
Requires-Dist: uvicorn[standard]
Requires-Dist: fastapi (>=0.76.0)
Requires-Dist: prometheus-client (>=0.12.0)
Requires-Dist: numpy
Requires-Dist: grpcio-reflection (>=1.49.0)
Requires-Dist: jina-hubble-sdk (>=0.30.4)
Requires-Dist: pathspec
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0)
Requires-Dist: requests
Requires-Dist: urllib3 (<2.0.0,>=1.25.9)
Requires-Dist: packaging (>=20.0)
Requires-Dist: opentelemetry-sdk (>=1.14.0)
Requires-Dist: aiofiles
Requires-Dist: uvloop ; platform_system != "Windows"
Provides-Extra: pillow
Requires-Dist: Pillow ; extra == 'pillow'
Provides-Extra: aiofiles
Requires-Dist: aiofiles ; extra == 'aiofiles'
Provides-Extra: aiohttp
Requires-Dist: aiohttp ; extra == 'aiohttp'
Provides-Extra: all
Requires-Dist: psutil ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'all'
Requires-Dist: torch ; extra == 'all'
Requires-Dist: docarray (>=0.16.4) ; extra == 'all'
Requires-Dist: python-multipart ; extra == 'all'
Requires-Dist: scipy (>=1.6.1) ; extra == 'all'
Requires-Dist: pytest-mock ; extra == 'all'
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'all'
Requires-Dist: aiohttp ; extra == 'all'
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'all'
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'all'
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'all'
Requires-Dist: pytest-lazy-fixture ; extra == 'all'
Requires-Dist: websockets ; extra == 'all'
Requires-Dist: requests-mock ; extra == 'all'
Requires-Dist: coverage (==6.2) ; extra == 'all'
Requires-Dist: Pillow ; extra == 'all'
Requires-Dist: pydantic (<2.0.0) ; extra == 'all'
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'all'
Requires-Dist: filelock ; extra == 'all'
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'all'
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'all'
Requires-Dist: pytest-repeat ; extra == 'all'
Requires-Dist: jsonschema ; extra == 'all'
Requires-Dist: jcloud (>=0.0.35) ; extra == 'all'
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: pytest-timeout ; extra == 'all'
Requires-Dist: pytest-asyncio ; extra == 'all'
Requires-Dist: bs4 ; extra == 'all'
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'all'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'all'
Requires-Dist: docker ; extra == 'all'
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'all'
Requires-Dist: tensorflow (>=2.0) ; extra == 'all'
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'all'
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'all'
Requires-Dist: pytest-custom-exit-code ; extra == 'all'
Requires-Dist: protobuf (>=3.19.0) ; extra == 'all'
Requires-Dist: pytest-reraise ; extra == 'all'
Requires-Dist: uvicorn[standard] ; extra == 'all'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'all'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'all'
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'all'
Requires-Dist: black (==22.3.0) ; extra == 'all'
Requires-Dist: numpy ; extra == 'all'
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'all'
Requires-Dist: pathspec ; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'all'
Requires-Dist: requests ; extra == 'all'
Requires-Dist: mock ; extra == 'all'
Requires-Dist: urllib3 (<2.0.0,>=1.25.9) ; extra == 'all'
Requires-Dist: packaging (>=20.0) ; extra == 'all'
Requires-Dist: pytest ; extra == 'all'
Requires-Dist: sgqlc ; extra == 'all'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'all'
Requires-Dist: flaky ; extra == 'all'
Requires-Dist: aiofiles ; extra == 'all'
Requires-Dist: uvloop ; (platform_system != "Windows") and extra == 'all'
Provides-Extra: black
Requires-Dist: black (==22.3.0) ; extra == 'black'
Provides-Extra: bs4
Requires-Dist: bs4 ; extra == 'bs4'
Provides-Extra: cicd
Requires-Dist: jsonschema ; extra == 'cicd'
Requires-Dist: torch ; extra == 'cicd'
Requires-Dist: bs4 ; extra == 'cicd'
Requires-Dist: sgqlc ; extra == 'cicd'
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'cicd'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'cicd'
Requires-Dist: tensorflow (>=2.0) ; extra == 'cicd'
Provides-Extra: core
Requires-Dist: pydantic (<2.0.0) ; extra == 'core'
Requires-Dist: protobuf (>=3.19.0) ; extra == 'core'
Requires-Dist: jcloud (>=0.0.35) ; extra == 'core'
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'core'
Requires-Dist: docarray (>=0.16.4) ; extra == 'core'
Requires-Dist: urllib3 (<2.0.0,>=1.25.9) ; extra == 'core'
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'core'
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'core'
Requires-Dist: packaging (>=20.0) ; extra == 'core'
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'core'
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'core'
Requires-Dist: numpy ; extra == 'core'
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'core'
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'core'
Provides-Extra: coverage
Requires-Dist: coverage (==6.2) ; extra == 'coverage'
Provides-Extra: devel
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'devel'
Requires-Dist: python-multipart ; extra == 'devel'
Requires-Dist: aiohttp ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'devel'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'devel'
Requires-Dist: websockets ; extra == 'devel'
Requires-Dist: filelock ; extra == 'devel'
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'devel'
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'devel'
Requires-Dist: docker ; extra == 'devel'
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'devel'
Requires-Dist: uvicorn[standard] ; extra == 'devel'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'devel'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'devel'
Requires-Dist: pathspec ; extra == 'devel'
Requires-Dist: requests ; extra == 'devel'
Requires-Dist: sgqlc ; extra == 'devel'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'devel'
Requires-Dist: aiofiles ; extra == 'devel'
Requires-Dist: uvloop ; (platform_system != "Windows") and extra == 'devel'
Provides-Extra: docarray
Requires-Dist: docarray (>=0.16.4) ; extra == 'docarray'
Provides-Extra: docker
Requires-Dist: docker ; extra == 'docker'
Provides-Extra: fastapi
Requires-Dist: fastapi (>=0.76.0) ; extra == 'fastapi'
Provides-Extra: filelock
Requires-Dist: filelock ; extra == 'filelock'
Provides-Extra: flaky
Requires-Dist: flaky ; extra == 'flaky'
Provides-Extra: grpcio
Requires-Dist: grpcio (<1.48.1,>=1.46.0) ; extra == 'grpcio'
Provides-Extra: grpcio-health-checking
Requires-Dist: grpcio-health-checking (<1.48.1,>=1.46.0) ; extra == 'grpcio-health-checking'
Provides-Extra: grpcio-reflection
Requires-Dist: grpcio-reflection (<1.48.1,>=1.46.0) ; extra == 'grpcio-reflection'
Provides-Extra: jcloud
Requires-Dist: jcloud (>=0.0.35) ; extra == 'jcloud'
Provides-Extra: jina-hubble-sdk
Requires-Dist: jina-hubble-sdk (>=0.30.4) ; extra == 'jina-hubble-sdk'
Provides-Extra: jsonschema
Requires-Dist: jsonschema ; extra == 'jsonschema'
Provides-Extra: kubernetes
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'kubernetes'
Provides-Extra: mock
Requires-Dist: mock ; extra == 'mock'
Provides-Extra: numpy
Requires-Dist: numpy ; extra == 'numpy'
Provides-Extra: opentelemetry-api
Requires-Dist: opentelemetry-api (>=1.12.0) ; extra == 'opentelemetry-api'
Provides-Extra: opentelemetry-exporter-otlp
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'opentelemetry-exporter-otlp'
Provides-Extra: opentelemetry-exporter-otlp-proto-grpc
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'opentelemetry-exporter-otlp-proto-grpc'
Provides-Extra: opentelemetry-exporter-prometheus
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'opentelemetry-exporter-prometheus'
Provides-Extra: opentelemetry-instrumentation-aiohttp-client
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'opentelemetry-instrumentation-aiohttp-client'
Provides-Extra: opentelemetry-instrumentation-fastapi
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'opentelemetry-instrumentation-fastapi'
Provides-Extra: opentelemetry-instrumentation-grpc
Requires-Dist: opentelemetry-instrumentation-grpc (>=0.35b0) ; extra == 'opentelemetry-instrumentation-grpc'
Provides-Extra: opentelemetry-sdk
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'opentelemetry-sdk'
Provides-Extra: opentelemetry-test-utils
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'opentelemetry-test-utils'
Provides-Extra: packaging
Requires-Dist: packaging (>=20.0) ; extra == 'packaging'
Provides-Extra: pathspec
Requires-Dist: pathspec ; extra == 'pathspec'
Provides-Extra: perf
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'perf'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'perf'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'perf'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'perf'
Requires-Dist: uvloop ; extra == 'perf'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'perf'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'perf'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'perf'
Provides-Extra: portforward
Requires-Dist: portforward (<0.4.3,>=0.2.4) ; extra == 'portforward'
Provides-Extra: prometheus-api-client
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'prometheus-api-client'
Provides-Extra: prometheus_client
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'prometheus_client'
Provides-Extra: protobuf
Requires-Dist: protobuf (>=3.19.0) ; extra == 'protobuf'
Provides-Extra: psutil
Requires-Dist: psutil ; extra == 'psutil'
Provides-Extra: pydantic
Requires-Dist: pydantic (<2.0.0) ; extra == 'pydantic'
Provides-Extra: pytest
Requires-Dist: pytest ; extra == 'pytest'
Provides-Extra: pytest-asyncio
Requires-Dist: pytest-asyncio ; extra == 'pytest-asyncio'
Provides-Extra: pytest-cov
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'pytest-cov'
Provides-Extra: pytest-custom_exit_code
Requires-Dist: pytest-custom-exit-code ; extra == 'pytest-custom_exit_code'
Provides-Extra: pytest-kind
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'pytest-kind'
Provides-Extra: pytest-lazy-fixture
Requires-Dist: pytest-lazy-fixture ; extra == 'pytest-lazy-fixture'
Provides-Extra: pytest-mock
Requires-Dist: pytest-mock ; extra == 'pytest-mock'
Provides-Extra: pytest-repeat
Requires-Dist: pytest-repeat ; extra == 'pytest-repeat'
Provides-Extra: pytest-reraise
Requires-Dist: pytest-reraise ; extra == 'pytest-reraise'
Provides-Extra: pytest-timeout
Requires-Dist: pytest-timeout ; extra == 'pytest-timeout'
Provides-Extra: python-multipart
Requires-Dist: python-multipart ; extra == 'python-multipart'
Provides-Extra: pyyaml
Requires-Dist: pyyaml (>=5.3.1) ; extra == 'pyyaml'
Provides-Extra: requests
Requires-Dist: requests ; extra == 'requests'
Provides-Extra: requests-mock
Requires-Dist: requests-mock ; extra == 'requests-mock'
Provides-Extra: scipy
Requires-Dist: scipy (>=1.6.1) ; extra == 'scipy'
Provides-Extra: sgqlc
Requires-Dist: sgqlc ; extra == 'sgqlc'
Provides-Extra: standard
Requires-Dist: opentelemetry-exporter-prometheus (>=1.12.0rc1) ; extra == 'standard'
Requires-Dist: filelock ; extra == 'standard'
Requires-Dist: requests ; extra == 'standard'
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.33b0) ; extra == 'standard'
Requires-Dist: opentelemetry-exporter-otlp (>=1.12.0) ; extra == 'standard'
Requires-Dist: pathspec ; extra == 'standard'
Requires-Dist: uvicorn[standard] ; extra == 'standard'
Requires-Dist: python-multipart ; extra == 'standard'
Requires-Dist: fastapi (>=0.76.0) ; extra == 'standard'
Requires-Dist: prometheus-client (>=0.12.0) ; extra == 'standard'
Requires-Dist: uvloop ; extra == 'standard'
Requires-Dist: aiohttp ; extra == 'standard'
Requires-Dist: opentelemetry-sdk (>=1.14.0) ; extra == 'standard'
Requires-Dist: opentelemetry-instrumentation-aiohttp-client (>=0.33b0) ; extra == 'standard'
Requires-Dist: aiofiles ; extra == 'standard'
Requires-Dist: docker ; extra == 'standard'
Requires-Dist: websockets ; extra == 'standard'
Provides-Extra: standrad
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc (>=1.13.0) ; extra == 'standrad'
Provides-Extra: strawberry-graphql
Requires-Dist: strawberry-graphql (>=0.96.0) ; extra == 'strawberry-graphql'
Provides-Extra: tensorflow
Requires-Dist: tensorflow (>=2.0) ; extra == 'tensorflow'
Provides-Extra: test
Requires-Dist: psutil ; extra == 'test'
Requires-Dist: scipy (>=1.6.1) ; extra == 'test'
Requires-Dist: pytest-mock ; extra == 'test'
Requires-Dist: pytest-cov (==3.0.0) ; extra == 'test'
Requires-Dist: pytest-lazy-fixture ; extra == 'test'
Requires-Dist: requests-mock ; extra == 'test'
Requires-Dist: coverage (==6.2) ; extra == 'test'
Requires-Dist: Pillow ; extra == 'test'
Requires-Dist: kubernetes (>=18.20.0) ; extra == 'test'
Requires-Dist: prometheus-api-client (>=0.5.1) ; extra == 'test'
Requires-Dist: opentelemetry-test-utils (>=0.33b0) ; extra == 'test'
Requires-Dist: pytest-repeat ; extra == 'test'
Requires-Dist: pytest-timeout ; extra == 'test'
Requires-Dist: pytest-asyncio ; extra == 'test'
Requires-Dist: pytest-kind (==22.11.1) ; extra == 'test'
Requires-Dist: pytest-custom-exit-code ; extra == 'test'
Requires-Dist: pytest-reraise ; extra == 'test'
Requires-Dist: black (==22.3.0) ; extra == 'test'
Requires-Dist: mock ; extra == 'test'
Requires-Dist: pytest ; extra == 'test'
Requires-Dist: flaky ; extra == 'test'
Provides-Extra: torch
Requires-Dist: torch ; extra == 'torch'
Provides-Extra: urllib3
Requires-Dist: urllib3 (<2.0.0,>=1.25.9) ; extra == 'urllib3'
Provides-Extra: uvicorn_standard_
Requires-Dist: uvicorn[standard] ; extra == 'uvicorn_standard_'
Provides-Extra: uvloop
Requires-Dist: uvloop ; extra == 'uvloop'
Provides-Extra: watchfiles
Requires-Dist: watchfiles (>=0.18.0) ; extra == 'watchfiles'
Provides-Extra: websockets
Requires-Dist: websockets ; extra == 'websockets'

<p align="center">
<!-- survey banner start -->
<a href="https://10sw1tcpld4.typeform.com/to/EGAEReM7?utm_source=readme&utm_medium=github&utm_campaign=user%20experience&utm_term=feb2023&utm_content=survey">
  <img src="./.github/banner.svg?raw=true">
</a>
<!-- survey banner end -->
</p>

<p align="center">
<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/docs/_static/logo-light.svg?raw=true" alt="Jina logo: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · Cloud Native" width="150px"></a>
</p>

<p align="center">
<b>Build multimodal AI services with cloud native technologies</b>
</p>

<p align="center">
<a href="https://pypi.org/project/jina/"><img alt="PyPI" src="https://img.shields.io/pypi/v/jina?label=Release&style=flat-square"></a>
<!--<a href="https://codecov.io/gh/jina-ai/jina"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/jina-ai/jina/master?&logo=Codecov&logoColor=white&style=flat-square"></a>-->
<a href="https://discord.jina.ai"><img src="https://img.shields.io/discord/1106542220112302130?logo=discord&logoColor=white&style=flat-square"></a>
<a href="https://pypistats.org/packages/jina"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/jina?style=flat-square"></a>
<a href="https://github.com/jina-ai/jina/actions/workflows/cd.yml"><img alt="Github CD status" src="https://github.com/jina-ai/jina/actions/workflows/cd.yml/badge.svg"></a>
</p>

<!-- start jina-description -->

Jina lets you build multimodal [**AI services**](#build-ai-services) and [**pipelines**](#build-a-pipeline) that communicate via gRPC, HTTP and WebSockets, then scale them up and deploy to production. You can focus on your logic and algorithms, without worrying about the infrastructure complexity.

![](./.github/images/build-deploy.png)

Jina provides a smooth Pythonic experience transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Jina AI Cloud. Jina makes advanced solution engineering and cloud-native technologies accessible to every developer.

- Build applications for any [data type](https://docs.docarray.org/data_types/first_steps/), any mainstream deep learning framework, and any [protocol](https://docs.jina.ai/concepts/serving/gateway/#set-protocol-in-python).
- Design high-performance microservices, with [easy scaling](https://docs.jina.ai/concepts/orchestration/scale-out/), duplex client-server streaming, and async/non-blocking data processing over dynamic flows.
- Docker container integration via [Executor Hub](https://cloud.jina.ai), OpenTelemetry/Prometheus observability, and fast Kubernetes/Docker-Compose deployment.
- CPU/GPU hosting via [Jina AI Cloud](https://cloud.jina.ai).

<details>
    <summary><strong>Wait, how is Jina different from FastAPI?</strong></summary>
Jina's value proposition may seem quite similar to that of FastAPI. However, there are several fundamental differences:

 **Data structure and communication protocols**
  - FastAPI communication relies on Pydantic, while Jina relies on [DocArray](https://github.com/docarray/docarray), which allows Jina to expose its services over multiple protocols.

 **Advanced orchestration and scaling capabilities**
  - Jina lets you deploy applications composed of multiple microservices that can be containerized and scaled independently.
  - Jina makes it easy to containerize and orchestrate those services, providing concurrency and scalability out of the box.

 **Journey to the cloud**
  - Jina provides a smooth transition from local development (using [DocArray](https://github.com/docarray/docarray)), to local serving (using Jina's orchestration layer), to production-ready services that use Kubernetes to orchestrate the lifetime of containers.
  - With [Jina AI Cloud](https://cloud.jina.ai) you get access to scalable and serverless deployments of your applications in one command.
</details>

<!-- end jina-description -->

## [Documentation](https://docs.jina.ai)

## Install 

```bash
pip install jina
```

Find more install options on [Apple Silicon](https://docs.jina.ai/get-started/install/apple-silicon-m1-m2/)/[Windows](https://docs.jina.ai/get-started/install/windows/).

## Get Started

### Basic Concepts

Jina has three fundamental layers:

- Data layer: [**BaseDoc**](https://docs.docarray.org/) and [**DocList**](https://docs.docarray.org/) (from [DocArray](https://github.com/docarray/docarray)) are the input/output formats in Jina.
- Serving layer: An [**Executor**](https://docs.jina.ai/concepts/serving/executor/) is a Python class that transforms and processes Documents. A [**Gateway**](https://docs.jina.ai/concepts/serving/gateway/) is the service that connects all Executors inside a Flow.
- Orchestration layer: A [**Deployment**](https://docs.jina.ai/concepts/orchestration/deployment) serves a single Executor, while a [**Flow**](https://docs.jina.ai/concepts/orchestration/flow/) serves Executors chained into a pipeline.


[The full glossary is explained here](https://docs.jina.ai/concepts/preliminaries/#).

### Build AI Services
<!-- start build-ai-services -->

Let's build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/serving/executor/)**. Our simple Executor will wrap the [StableLM](https://huggingface.co/stabilityai/stablelm-base-alpha-3b) LLM from Stability AI. We'll then use a **Deployment** to serve it.

![](./.github/images/deployment-diagram.png)

> **Note**
> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).

Let's implement the service's logic:

<table>
<tr>
<th><code>executor.py</code></th> 
<tr>
<td>

```python
from jina import Executor, requests
from docarray import DocList, BaseDoc

from transformers import pipeline


class Prompt(BaseDoc):
    text: str


class Generation(BaseDoc):
    prompt: str
    text: str


class StableLM(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.generator = pipeline(
            'text-generation', model='stabilityai/stablelm-base-alpha-3b'
        )

    @requests
    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:
        generations = DocList[Generation]()
        prompts = docs.text
        llm_outputs = self.generator(prompts)
        for prompt, output in zip(prompts, llm_outputs):
            # each pipeline output is a list of dicts holding 'generated_text'
            generations.append(Generation(prompt=prompt, text=output[0]['generated_text']))
        return generations

```

</td>
</tr>
</table>

Then we deploy it with either the Python API or YAML:
<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>deployment.py</code> </th> 
<th> YAML: <code>deployment.yml</code> </th>
</tr>
<tr>
<td>

```python
from jina import Deployment
from executor import StableLM

dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)

with dep:
    dep.block()
```

</td>
<td>

```yaml
jtype: Deployment
with:
  uses: StableLM
  py_modules:
    - executor.py
  timeout_ready: -1
  port: 12345
```

And run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`

</td>
</tr>
</table>
</div>

Use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the service:

```python
from jina import Client
from docarray import DocList, BaseDoc


class Prompt(BaseDoc):
    text: str


class Generation(BaseDoc):
    prompt: str
    text: str


prompt = Prompt(
    text='suggest an interesting image generation prompt for a mona lisa variant'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[prompt], return_type=DocList[Generation])

print(response[0].text)
```

```text
a steampunk version of the Mona Lisa, incorporating mechanical gears, brass elements, and Victorian era clothing details
```

<!-- end build-ai-services -->

> **Note**
> In a notebook, you can't use `deployment.block()` and then make requests to the client. Please refer to the Colab link above for reproducible Jupyter Notebook code snippets.

### Build a pipeline

<!-- start build-pipelines -->

Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.jina.ai/concepts/orchestration/flow/) comes in.

A Flow is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.jina.ai/concepts/serving/executor/) and a [Gateway](https://docs.jina.ai/concepts/serving/gateway/) to offer an end-to-end service.

> **Note**
> If you just want to serve a single Executor, you can use a [Deployment](#build-ai-services).

For instance, let's combine [our StableLM language model](#build-ai-services) with a Stable Diffusion image generation service. Chaining these services together into a [Flow](https://docs.jina.ai/concepts/orchestration/flow/) gives us a service that generates images based on a prompt written by the LLM.


<table>
<tr>
<th><code>text_to_image.py</code></th> 
<tr>
<td>

```python
import numpy as np
from jina import Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc


class Generation(BaseDoc):
    prompt: str
    text: str


class TextToImage(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        from diffusers import StableDiffusionPipeline
        import torch

        self.pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

    @requests
    def generate_image(self, docs: DocList[Generation], **kwargs) -> DocList[ImageDoc]:
        # the pipeline returns images in PIL format
        # (https://pillow.readthedocs.io/en/stable/)
        images = self.pipe(docs.text).images
        return DocList[ImageDoc]([ImageDoc(tensor=np.array(image)) for image in images])
```

</td>
</tr>
</table>


![](./.github/images/flow-diagram.png)

Build the Flow with either Python or YAML:

<div class="table-wrapper">
<table>
<tr>
<th> Python API: <code>flow.py</code> </th> 
<th> YAML: <code>flow.yml</code> </th>
</tr>
<tr>
<td>

```python
from jina import Flow
from executor import StableLM
from text_to_image import TextToImage

flow = (
    Flow(port=12345)
    .add(uses=StableLM, timeout_ready=-1)
    .add(uses=TextToImage, timeout_ready=-1)
)

with flow:
    flow.block()
```

</td>
<td>

```yaml
jtype: Flow
with:
    port: 12345
executors:
  - uses: StableLM
    timeout_ready: -1
    py_modules:
      - executor.py
  - uses: TextToImage
    timeout_ready: -1
    py_modules:
      - text_to_image.py
```

Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`

</td>
</tr>
</table>
</div>

Then, use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the Flow:

```python
from jina import Client
from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc


class Prompt(BaseDoc):
    text: str


prompt = Prompt(
    text='suggest an interesting image generation prompt for a mona lisa variant'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[prompt], return_type=DocList[ImageDoc])

response[0].display()
```

![](./.github/images/mona-lisa.png)

<!-- end build-pipelines -->

### Easy scalability and concurrency

Why not just use standard Python to build that microservice and pipeline? Jina accelerates your application's time to market by making it scalable and cloud-native. Jina also handles the infrastructure complexity in production and other Day-2 operations, so you can focus on the data application itself.

Increase your application's throughput with scalability features out of the box, like [replicas](https://docs.jina.ai/concepts/orchestration/scale-out/#replicate-executors), [shards](https://docs.jina.ai/concepts/orchestration/scale-out/#customize-polling-behaviors) and [dynamic batching](https://docs.jina.ai/concepts/serving/executor/dynamic-batching/).

Let's scale a Stable Diffusion Executor deployment with replicas and dynamic batching:

![](./.github/images/scaled-deployment.png)

* Create two replicas, with [a GPU assigned for each](https://docs.jina.ai/concepts/orchestration/scale-out/#replicate-on-multiple-gpus).
* Enable dynamic batching to group parallel incoming requests into a single model inference call.


<div class="table-wrapper">
<table>
<tr>
<th> Normal Deployment </th> 
<th> Scaled Deployment </th>
</tr>
<tr>
<td>

```yaml
jtype: Deployment
with:
  uses: TextToImage
  timeout_ready: -1
  py_modules:
    - text_to_image.py
```

</td>
<td>

```yaml
jtype: Deployment
with:
  uses: TextToImage
  timeout_ready: -1
  py_modules:
    - text_to_image.py
  env:
   CUDA_VISIBLE_DEVICES: RR
  replicas: 2
  uses_dynamic_batching: # configure dynamic batching
    /default:
      preferred_batch_size: 10
      timeout: 200
```

</td>
</tr>
</table>
</div>

Assuming your machine has two GPUs, using the scaled deployment YAML will give better throughput compared to the normal deployment.

These features apply to both [Deployment YAML](https://docs.jina.ai/concepts/orchestration/yaml-spec/#example-yaml) and [Flow YAML](https://docs.jina.ai/concepts/orchestration/yaml-spec/#example-yaml). Thanks to the YAML syntax, you can inject deployment configurations regardless of Executor code.

## Deploy to the cloud

### Containerize your Executor

To deploy your services to the cloud, you need to containerize them. Jina provides the [Executor Hub](https://docs.jina.ai/concepts/serving/executor/hub/create-hub-executor/), a tool that streamlines this process and takes much of the hassle off your shoulders. It also lets you share these Executors publicly or privately.

You just need to structure your Executor in a folder:

```text
TextToImage/
├── executor.py
├── config.yml
├── requirements.txt
```
<div class="table-wrapper">
<table>
<tr>
<th> <code>config.yml</code> </th>
<th> <code>requirements.txt</code> </th>
</tr>
<tr>
<td>

```yaml
jtype: TextToImage
py_modules:
  - executor.py
metas:
  name: TextToImage
  description: Text to Image generation Executor based on StableDiffusion
  url:
  keywords: []
```

</td>
<td>

```text
diffusers
accelerate
transformers
```

</td>
</tr>
</table>
</div>


Then push the Executor to the Hub by running `jina hub push TextToImage`.

This gives you a URL that you can reference in your `Deployment` and `Flow` to use the pushed Executor containers.


```yaml
jtype: Flow
with:
    port: 12345
executors:
  - uses: jinaai+docker://<user-id>/StableLM
  - uses: jinaai+docker://<user-id>/TextToImage
```


### Get on the fast lane to cloud-native

Using Kubernetes with Jina is easy:

```bash
jina export kubernetes flow.yml ./my-k8s
kubectl apply -R -f my-k8s
```

And so is Docker Compose:

```bash
jina export docker-compose flow.yml docker-compose.yml
docker-compose up
```

> **Note**
> You can also export Deployment YAML to [Kubernetes](https://docs.jina.ai/concepts/serving/executor/serve/#serve-via-kubernetes) and [Docker Compose](https://docs.jina.ai/concepts/serving/executor/serve/#serve-via-docker-compose).

That's not all. We also support [OpenTelemetry, Prometheus, and Jaeger](https://docs.jina.ai/cloud-nativeness/opentelemetry/).

What cloud-native technology is still challenging to you? [Tell us](https://github.com/jina-ai/jina/issues) and we'll handle the complexity and make it easy for you.

### Deploy to JCloud

You can also deploy a Flow to JCloud, where you get autoscaling, monitoring and more with a single command.

First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.jina.ai/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.

Then, use the `jina cloud deploy` command to deploy to the cloud:

```shell
wget https://raw.githubusercontent.com/jina-ai/jina/master/.github/getting-started/jcloud-flow.yml
jina cloud deploy jcloud-flow.yml
```

> **Warning**
>
> Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.

Read more about [deploying Flows to JCloud](https://docs.jina.ai/concepts/jcloud/#deploy).

<!-- start support-pitch -->

## Support

- Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
- Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/jina-ai).

## Join Us

Jina is backed by [Jina AI](https://jina.ai) and licensed under [Apache-2.0](./LICENSE).

<!-- end support-pitch -->
